00:00:00.001 Started by upstream project "autotest-per-patch" build number 132703
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.085 The recommended git tool is: git
00:00:00.085 using credential 00000000-0000-0000-0000-000000000002
00:00:00.087 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.140 Fetching changes from the remote Git repository
00:00:00.142 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.202 Using shallow fetch with depth 1
00:00:00.202 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.202 > git --version # timeout=10
00:00:00.260 > git --version # 'git version 2.39.2'
00:00:00.260 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.296 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.296 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.200 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.216 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.230 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.230 > git config core.sparsecheckout # timeout=10
00:00:07.242 > git read-tree -mu HEAD # timeout=10
00:00:07.263 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.287 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.288 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.387 [Pipeline] Start of Pipeline
00:00:07.403 [Pipeline] library
00:00:07.405 Loading library shm_lib@master
00:00:07.405 Library shm_lib@master is cached. Copying from home.
00:00:07.474 [Pipeline] node
00:00:07.491 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.492 [Pipeline] {
00:00:07.500 [Pipeline] catchError
00:00:07.502 [Pipeline] {
00:00:07.513 [Pipeline] wrap
00:00:07.519 [Pipeline] {
00:00:07.524 [Pipeline] stage
00:00:07.526 [Pipeline] { (Prologue)
00:00:07.771 [Pipeline] sh
00:00:08.558 + logger -p user.info -t JENKINS-CI
00:00:08.586 [Pipeline] echo
00:00:08.587 Node: WFP16
00:00:08.593 [Pipeline] sh
00:00:08.976 [Pipeline] setCustomBuildProperty
00:00:08.988 [Pipeline] echo
00:00:08.989 Cleanup processes
00:00:08.995 [Pipeline] sh
00:00:09.285 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.286 69974 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.297 [Pipeline] sh
00:00:09.589 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.589 ++ grep -v 'sudo pgrep'
00:00:09.589 ++ awk '{print $1}'
00:00:09.589 + sudo kill -9
00:00:09.589 + true
00:00:09.603 [Pipeline] cleanWs
00:00:09.612 [WS-CLEANUP] Deleting project workspace...
00:00:09.612 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.622 [WS-CLEANUP] done
00:00:09.627 [Pipeline] setCustomBuildProperty
00:00:09.661 [Pipeline] sh
00:00:09.948 + sudo git config --global --replace-all safe.directory '*'
00:00:10.033 [Pipeline] httpRequest
00:00:11.810 [Pipeline] echo
00:00:11.812 Sorcerer 10.211.164.20 is alive
00:00:11.832 [Pipeline] retry
00:00:11.845 [Pipeline] {
00:00:11.859 [Pipeline] httpRequest
00:00:11.863 HttpMethod: GET
00:00:11.863 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.864 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.872 Response Code: HTTP/1.1 200 OK
00:00:11.872 Success: Status code 200 is in the accepted range: 200,404
00:00:11.873 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:29.800 [Pipeline] }
00:00:29.823 [Pipeline] // retry
00:00:29.841 [Pipeline] sh
00:00:30.131 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:30.149 [Pipeline] httpRequest
00:00:30.694 [Pipeline] echo
00:00:30.696 Sorcerer 10.211.164.20 is alive
00:00:30.707 [Pipeline] retry
00:00:30.709 [Pipeline] {
00:00:30.724 [Pipeline] httpRequest
00:00:30.730 HttpMethod: GET
00:00:30.730 URL: http://10.211.164.20/packages/spdk_98eca6fa083aaf48dc253cd326ac15e635bc4141.tar.gz
00:00:30.731 Sending request to url: http://10.211.164.20/packages/spdk_98eca6fa083aaf48dc253cd326ac15e635bc4141.tar.gz
00:00:30.739 Response Code: HTTP/1.1 200 OK
00:00:30.739 Success: Status code 200 is in the accepted range: 200,404
00:00:30.740 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_98eca6fa083aaf48dc253cd326ac15e635bc4141.tar.gz
00:03:24.821 [Pipeline] }
00:03:24.839 [Pipeline] // retry
00:03:24.847 [Pipeline] sh
00:03:25.134 + tar --no-same-owner -xf spdk_98eca6fa083aaf48dc253cd326ac15e635bc4141.tar.gz
00:03:27.711 [Pipeline] sh
00:03:27.998 + git -C spdk log --oneline -n5
00:03:27.998 98eca6fa0 lib/thread: Add API to register a post poller handler
00:03:27.998 2c140f58f nvme/rdma: Support accel sequence
00:03:27.998 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:03:27.998 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller
00:03:27.998 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller
00:03:28.011 [Pipeline] }
00:03:28.026 [Pipeline] // stage
00:03:28.035 [Pipeline] stage
00:03:28.038 [Pipeline] { (Prepare)
00:03:28.055 [Pipeline] writeFile
00:03:28.073 [Pipeline] sh
00:03:28.360 + logger -p user.info -t JENKINS-CI
00:03:28.374 [Pipeline] sh
00:03:28.661 + logger -p user.info -t JENKINS-CI
00:03:28.674 [Pipeline] sh
00:03:28.961 + cat autorun-spdk.conf
00:03:28.961 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:28.961 SPDK_TEST_NVMF=1
00:03:28.961 SPDK_TEST_NVME_CLI=1
00:03:28.961 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:28.961 SPDK_TEST_NVMF_NICS=e810
00:03:28.961 SPDK_TEST_VFIOUSER=1
00:03:28.961 SPDK_RUN_UBSAN=1
00:03:28.961 NET_TYPE=phy
00:03:28.969 RUN_NIGHTLY=0
00:03:28.984 [Pipeline] readFile
00:03:29.035 [Pipeline] withEnv
00:03:29.039 [Pipeline] {
00:03:29.051 [Pipeline] sh
00:03:29.339 + set -ex
00:03:29.339 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:29.339 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:29.339 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:29.339 ++ SPDK_TEST_NVMF=1
00:03:29.339 ++ SPDK_TEST_NVME_CLI=1
00:03:29.339 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:29.339 ++ SPDK_TEST_NVMF_NICS=e810
00:03:29.339 ++ SPDK_TEST_VFIOUSER=1
00:03:29.339 ++ SPDK_RUN_UBSAN=1
00:03:29.339 ++ NET_TYPE=phy
00:03:29.339 ++ RUN_NIGHTLY=0
00:03:29.339 + case $SPDK_TEST_NVMF_NICS in
00:03:29.339 + DRIVERS=ice
00:03:29.339 + [[ tcp == \r\d\m\a ]]
00:03:29.339 + [[ -n ice ]]
00:03:29.339 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:29.339 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:35.920 rmmod: ERROR: Module irdma is not currently loaded
00:03:35.920 rmmod: ERROR: Module i40iw is not currently loaded
00:03:35.920 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:35.920 + true
00:03:35.920 + for D in $DRIVERS
00:03:35.920 + sudo modprobe ice
00:03:35.920 + exit 0
00:03:35.930 [Pipeline] }
00:03:35.944 [Pipeline] // withEnv
00:03:35.949 [Pipeline] }
00:03:35.963 [Pipeline] // stage
00:03:35.973 [Pipeline] catchError
00:03:35.975 [Pipeline] {
00:03:35.988 [Pipeline] timeout
00:03:35.988 Timeout set to expire in 1 hr 0 min
00:03:35.990 [Pipeline] {
00:03:36.004 [Pipeline] stage
00:03:36.006 [Pipeline] { (Tests)
00:03:36.021 [Pipeline] sh
00:03:36.309 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:36.309 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:36.309 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:36.309 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:36.309 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:36.309 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:36.309 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:36.309 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:36.309 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:36.309 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:36.309 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:36.309 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:36.309 + source /etc/os-release
00:03:36.309 ++ NAME='Fedora Linux'
00:03:36.309 ++ VERSION='39 (Cloud Edition)'
00:03:36.309 ++ ID=fedora
00:03:36.309 ++ VERSION_ID=39
00:03:36.309 ++ VERSION_CODENAME=
00:03:36.309 ++ PLATFORM_ID=platform:f39
00:03:36.309 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:36.309 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:36.309 ++ LOGO=fedora-logo-icon
00:03:36.309 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:36.309 ++ HOME_URL=https://fedoraproject.org/
00:03:36.309 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:36.309 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:36.309 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:36.309 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:36.309 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:36.309 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:36.309 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:36.309 ++ SUPPORT_END=2024-11-12
00:03:36.309 ++ VARIANT='Cloud Edition'
00:03:36.309 ++ VARIANT_ID=cloud
00:03:36.309 + uname -a
00:03:36.309 Linux spdk-wfp-16 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:36.309 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:38.854 Hugepages
00:03:38.854 node hugesize free / total
00:03:38.854 node0 1048576kB 0 / 0
00:03:38.854 node0 2048kB 0 / 0
00:03:38.854 node1 1048576kB 0 / 0
00:03:38.854 node1 2048kB 0 / 0
00:03:38.854
00:03:38.854 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:38.854 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:38.854 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:38.854 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:38.854 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:38.854 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:38.854 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:38.854 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:38.854 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:38.854 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:38.854 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:38.854 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:38.854 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:38.854 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:38.854 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:38.854 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:38.854 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:38.854 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:38.854 + rm -f /tmp/spdk-ld-path
00:03:38.854 + source autorun-spdk.conf
00:03:38.854 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:38.854 ++ SPDK_TEST_NVMF=1
00:03:38.854 ++ SPDK_TEST_NVME_CLI=1
00:03:38.854 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:38.854 ++ SPDK_TEST_NVMF_NICS=e810
00:03:38.854 ++ SPDK_TEST_VFIOUSER=1
00:03:38.854 ++ SPDK_RUN_UBSAN=1
00:03:38.854 ++ NET_TYPE=phy
00:03:38.854 ++ RUN_NIGHTLY=0
00:03:38.854 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:38.854 + [[ -n '' ]]
00:03:38.854 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:38.854 + for M in /var/spdk/build-*-manifest.txt
00:03:38.854 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:38.854 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:38.854 + for M in /var/spdk/build-*-manifest.txt
00:03:38.854 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:38.854 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:38.854 + for M in /var/spdk/build-*-manifest.txt
00:03:38.854 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:38.854 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:38.854 ++ uname
00:03:38.854 + [[ Linux == \L\i\n\u\x ]]
00:03:38.854 + sudo dmesg -T
00:03:38.854 + sudo dmesg --clear
00:03:38.854 + dmesg_pid=71415
00:03:38.855 + [[ Fedora Linux == FreeBSD ]]
00:03:38.855 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:38.855 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:38.855 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:38.855 + sudo dmesg -Tw
00:03:38.855 + [[ -x /usr/src/fio-static/fio ]]
00:03:38.855 + export FIO_BIN=/usr/src/fio-static/fio
00:03:38.855 + FIO_BIN=/usr/src/fio-static/fio
00:03:38.855 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:38.855 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:38.855 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:38.855 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:38.855 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:38.855 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:38.855 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:38.855 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:38.855 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:39.115 20:23:32 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:39.115 20:23:32 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:39.115 20:23:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:39.115 20:23:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:39.115 20:23:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:39.115 20:23:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:39.115 20:23:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:03:39.115 20:23:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:03:39.115 20:23:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:03:39.115 20:23:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:03:39.115 20:23:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:03:39.115 20:23:32 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:39.115 20:23:32 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:39.115 20:23:32 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:39.115 20:23:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:39.115 20:23:32 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:39.115 20:23:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:39.115 20:23:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:39.115 20:23:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:39.116 20:23:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:39.116 20:23:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:39.116 20:23:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:39.116 20:23:32 -- paths/export.sh@5 -- $ export PATH
00:03:39.116 20:23:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:39.116 20:23:32 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:39.116 20:23:32 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:39.116 20:23:32 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733426612.XXXXXX
00:03:39.116 20:23:32 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733426612.HGgoK9
00:03:39.116 20:23:32 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:39.116 20:23:32 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:39.116 20:23:32 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:39.116 20:23:32 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:39.116 20:23:32 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:39.116 20:23:32 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:39.116 20:23:32 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:39.116 20:23:32 -- common/autotest_common.sh@10 -- $ set +x
00:03:39.116 20:23:32 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:03:39.116 20:23:32 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:39.116 20:23:32 -- pm/common@17 -- $ local monitor
00:03:39.116 20:23:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:39.116 20:23:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:39.116 20:23:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:39.116 20:23:32 -- pm/common@21 -- $ date +%s
00:03:39.116 20:23:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:39.116 20:23:32 -- pm/common@21 -- $ date +%s
00:03:39.116 20:23:32 -- pm/common@25 -- $ sleep 1
00:03:39.116 20:23:32 -- pm/common@21 -- $ date +%s
00:03:39.116 20:23:32 -- pm/common@21 -- $ date +%s
00:03:39.116 20:23:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733426612
00:03:39.116 20:23:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733426612
00:03:39.116 20:23:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733426612
00:03:39.116 20:23:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733426612
00:03:39.116 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733426612_collect-vmstat.pm.log
00:03:39.116 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733426612_collect-cpu-load.pm.log
00:03:39.116 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733426612_collect-cpu-temp.pm.log
00:03:39.116 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733426612_collect-bmc-pm.bmc.pm.log
00:03:40.058 20:23:33 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:40.058 20:23:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:40.058 20:23:33 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:40.058 20:23:33 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:40.058 20:23:33 -- spdk/autobuild.sh@16 -- $ date -u
00:03:40.058 Thu Dec 5 07:23:33 PM UTC 2024
00:03:40.058 20:23:33 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:40.058 v25.01-pre-298-g98eca6fa0
00:03:40.058 20:23:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:40.058 20:23:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:40.058 20:23:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:40.058 20:23:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:40.058 20:23:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:40.058 20:23:33 -- common/autotest_common.sh@10 -- $ set +x
00:03:40.319 ************************************
00:03:40.319 START TEST ubsan
00:03:40.319 ************************************
00:03:40.319 20:23:33 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:40.319 using ubsan
00:03:40.319
00:03:40.319 real 0m0.000s
00:03:40.319 user 0m0.000s
00:03:40.319 sys 0m0.000s
00:03:40.319 20:23:33 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:40.319 20:23:33 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:40.319 ************************************
00:03:40.319 END TEST ubsan
00:03:40.319 ************************************
00:03:40.319 20:23:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:40.319 20:23:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:40.319 20:23:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:40.319 20:23:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:40.319 20:23:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:40.319 20:23:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:40.319 20:23:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:40.319 20:23:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:40.319 20:23:33 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:40.890 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:40.890 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:41.460 Using 'verbs' RDMA provider
00:03:57.314 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:04:09.538 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:04:09.538 Creating mk/config.mk...done.
00:04:09.538 Creating mk/cc.flags.mk...done.
00:04:09.538 Type 'make' to build.
00:04:09.538 20:24:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j112
00:04:09.538 20:24:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:09.538 20:24:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:09.538 20:24:01 -- common/autotest_common.sh@10 -- $ set +x
00:04:09.538 ************************************
00:04:09.538 START TEST make
00:04:09.538 ************************************
00:04:09.538 20:24:01 make -- common/autotest_common.sh@1129 -- $ make -j112
00:04:09.538 make[1]: Nothing to be done for 'all'.
00:04:10.919 The Meson build system
00:04:10.919 Version: 1.5.0
00:04:10.919 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:04:10.919 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:10.919 Build type: native build
00:04:10.919 Project name: libvfio-user
00:04:10.919 Project version: 0.0.1
00:04:10.919 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:10.919 C linker for the host machine: cc ld.bfd 2.40-14
00:04:10.919 Host machine cpu family: x86_64
00:04:10.919 Host machine cpu: x86_64
00:04:10.919 Run-time dependency threads found: YES
00:04:10.919 Library dl found: YES
00:04:10.919 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:10.919 Run-time dependency json-c found: YES 0.17
00:04:10.919 Run-time dependency cmocka found: YES 1.1.7
00:04:10.919 Program pytest-3 found: NO
00:04:10.919 Program flake8 found: NO
00:04:10.919 Program misspell-fixer found: NO
00:04:10.919 Program restructuredtext-lint found: NO
00:04:10.919 Program valgrind found: YES (/usr/bin/valgrind)
00:04:10.919 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:10.919 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:10.919 Compiler for C supports arguments -Wwrite-strings: YES
00:04:10.919 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:10.919 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:04:10.919 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:04:10.919 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:10.919 Build targets in project: 8
00:04:10.919 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:04:10.919 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:04:10.919
00:04:10.919 libvfio-user 0.0.1
00:04:10.919
00:04:10.919 User defined options
00:04:10.919 buildtype : debug
00:04:10.919 default_library: shared
00:04:10.919 libdir : /usr/local/lib
00:04:10.919
00:04:10.919 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:11.178 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:11.178 [1/37] Compiling C object samples/null.p/null.c.o
00:04:11.178 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:04:11.178 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:04:11.179 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:04:11.179 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:04:11.179 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:04:11.179 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:04:11.179 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:04:11.179 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:04:11.179 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:04:11.179 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:04:11.179 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:04:11.179 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:04:11.179 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:04:11.179 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:04:11.179 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:04:11.179 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:04:11.179 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:04:11.179 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:04:11.179 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:04:11.179 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:04:11.179 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:04:11.179 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:04:11.179 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:04:11.179 [25/37] Compiling C object samples/server.p/server.c.o
00:04:11.179 [26/37] Compiling C object samples/client.p/client.c.o
00:04:11.179 [27/37] Linking target samples/client
00:04:11.438 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:11.438 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:04:11.438 [30/37] Linking target test/unit_tests
00:04:11.438 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:04:11.438 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:04:11.438 [33/37] Linking target samples/lspci
00:04:11.438 [34/37] Linking target samples/null
00:04:11.438 [35/37] Linking target samples/gpio-pci-idio-16
00:04:11.438 [36/37] Linking target samples/server
00:04:11.438 [37/37] Linking target samples/shadow_ioeventfd_server
00:04:11.695 INFO: autodetecting backend as ninja
00:04:11.695 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:11.695 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:11.961 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:11.961 ninja: no work to do.
00:04:17.321 The Meson build system
00:04:17.321 Version: 1.5.0
00:04:17.321 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:04:17.321 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:04:17.321 Build type: native build
00:04:17.321 Program cat found: YES (/usr/bin/cat)
00:04:17.321 Project name: DPDK
00:04:17.321 Project version: 24.03.0
00:04:17.321 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:17.321 C linker for the host machine: cc ld.bfd 2.40-14
00:04:17.321 Host machine cpu family: x86_64
00:04:17.321 Host machine cpu: x86_64
00:04:17.321 Message: ## Building in Developer Mode ##
00:04:17.321 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:17.321 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:04:17.321 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:17.321 Program python3 found: YES (/usr/bin/python3)
00:04:17.321 Program cat found: YES (/usr/bin/cat)
00:04:17.321 Compiler for C supports arguments -march=native: YES
00:04:17.321 Checking for size of "void *" : 8
00:04:17.321 Checking for size of "void *" : 8 (cached)
00:04:17.321 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:04:17.321 Library m found: YES
00:04:17.321 Library numa found: YES
00:04:17.321 Has header "numaif.h" : YES
00:04:17.321 Library fdt found: NO
00:04:17.321 Library execinfo found: NO
00:04:17.321 Has header "execinfo.h" : YES
00:04:17.321 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:17.321 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:17.321 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:17.321 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:17.321 Run-time dependency openssl found: YES 3.1.1
00:04:17.321 Run-time dependency libpcap found: YES 1.10.4
00:04:17.321 Has header "pcap.h" with dependency libpcap: YES
00:04:17.321 Compiler for C supports arguments -Wcast-qual: YES
00:04:17.321 Compiler for C supports arguments -Wdeprecated: YES
00:04:17.321 Compiler for C supports arguments -Wformat: YES
00:04:17.321 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:17.321 Compiler for C supports arguments -Wformat-security: NO
00:04:17.321 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:17.321 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:17.321 Compiler for C supports arguments -Wnested-externs: YES
00:04:17.321 Compiler for C supports arguments -Wold-style-definition: YES
00:04:17.321 Compiler for C supports arguments -Wpointer-arith: YES
00:04:17.321 Compiler for C supports arguments -Wsign-compare: YES
00:04:17.321 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:17.321 Compiler for C supports arguments -Wundef: YES
00:04:17.321 Compiler for C supports arguments -Wwrite-strings: YES
00:04:17.321 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:17.321 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:17.321 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:17.321 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:17.321 Program objdump found: YES (/usr/bin/objdump)
00:04:17.321 Compiler for C supports arguments -mavx512f: YES
00:04:17.321 Checking if "AVX512 checking" compiles: YES
00:04:17.321 Fetching value of define "__SSE4_2__" : 1
00:04:17.321 Fetching value of define "__AES__" : 1
00:04:17.321 Fetching value of define "__AVX__" : 1
00:04:17.321 Fetching value of define "__AVX2__" : 1
00:04:17.321 Fetching value of define "__AVX512BW__" : 1
00:04:17.321 Fetching value of define "__AVX512CD__" : 1
00:04:17.321 Fetching value of define "__AVX512DQ__" : 1
00:04:17.321 Fetching value of define "__AVX512F__" : 1
00:04:17.322 Fetching value of define "__AVX512VL__" : 1 00:04:17.322 Fetching value of define "__PCLMUL__" : 1 00:04:17.322 Fetching value of define "__RDRND__" : 1 00:04:17.322 Fetching value of define "__RDSEED__" : 1 00:04:17.322 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:17.322 Fetching value of define "__znver1__" : (undefined) 00:04:17.322 Fetching value of define "__znver2__" : (undefined) 00:04:17.322 Fetching value of define "__znver3__" : (undefined) 00:04:17.322 Fetching value of define "__znver4__" : (undefined) 00:04:17.322 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:17.322 Message: lib/log: Defining dependency "log" 00:04:17.322 Message: lib/kvargs: Defining dependency "kvargs" 00:04:17.322 Message: lib/telemetry: Defining dependency "telemetry" 00:04:17.322 Checking for function "getentropy" : NO 00:04:17.322 Message: lib/eal: Defining dependency "eal" 00:04:17.322 Message: lib/ring: Defining dependency "ring" 00:04:17.322 Message: lib/rcu: Defining dependency "rcu" 00:04:17.322 Message: lib/mempool: Defining dependency "mempool" 00:04:17.322 Message: lib/mbuf: Defining dependency "mbuf" 00:04:17.322 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:17.322 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:17.322 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:17.322 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:17.322 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:17.322 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:17.322 Compiler for C supports arguments -mpclmul: YES 00:04:17.322 Compiler for C supports arguments -maes: YES 00:04:17.322 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:17.322 Compiler for C supports arguments -mavx512bw: YES 00:04:17.322 Compiler for C supports arguments -mavx512dq: YES 00:04:17.322 Compiler for C supports arguments -mavx512vl: YES 00:04:17.322 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:04:17.322 Compiler for C supports arguments -mavx2: YES 00:04:17.322 Compiler for C supports arguments -mavx: YES 00:04:17.322 Message: lib/net: Defining dependency "net" 00:04:17.322 Message: lib/meter: Defining dependency "meter" 00:04:17.322 Message: lib/ethdev: Defining dependency "ethdev" 00:04:17.322 Message: lib/pci: Defining dependency "pci" 00:04:17.322 Message: lib/cmdline: Defining dependency "cmdline" 00:04:17.322 Message: lib/hash: Defining dependency "hash" 00:04:17.322 Message: lib/timer: Defining dependency "timer" 00:04:17.322 Message: lib/compressdev: Defining dependency "compressdev" 00:04:17.322 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:17.322 Message: lib/dmadev: Defining dependency "dmadev" 00:04:17.322 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:17.322 Message: lib/power: Defining dependency "power" 00:04:17.322 Message: lib/reorder: Defining dependency "reorder" 00:04:17.322 Message: lib/security: Defining dependency "security" 00:04:17.322 Has header "linux/userfaultfd.h" : YES 00:04:17.322 Has header "linux/vduse.h" : YES 00:04:17.322 Message: lib/vhost: Defining dependency "vhost" 00:04:17.322 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:17.322 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:17.322 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:17.322 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:17.322 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:17.322 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:17.322 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:17.322 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:17.322 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:17.322 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:04:17.322 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:17.322 Configuring doxy-api-html.conf using configuration 00:04:17.322 Configuring doxy-api-man.conf using configuration 00:04:17.322 Program mandb found: YES (/usr/bin/mandb) 00:04:17.322 Program sphinx-build found: NO 00:04:17.322 Configuring rte_build_config.h using configuration 00:04:17.322 Message: 00:04:17.322 ================= 00:04:17.322 Applications Enabled 00:04:17.322 ================= 00:04:17.322 00:04:17.322 apps: 00:04:17.322 00:04:17.322 00:04:17.322 Message: 00:04:17.322 ================= 00:04:17.322 Libraries Enabled 00:04:17.322 ================= 00:04:17.322 00:04:17.322 libs: 00:04:17.322 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:17.322 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:17.322 cryptodev, dmadev, power, reorder, security, vhost, 00:04:17.322 00:04:17.322 Message: 00:04:17.322 =============== 00:04:17.322 Drivers Enabled 00:04:17.322 =============== 00:04:17.322 00:04:17.322 common: 00:04:17.322 00:04:17.322 bus: 00:04:17.322 pci, vdev, 00:04:17.322 mempool: 00:04:17.322 ring, 00:04:17.322 dma: 00:04:17.322 00:04:17.322 net: 00:04:17.322 00:04:17.322 crypto: 00:04:17.322 00:04:17.322 compress: 00:04:17.322 00:04:17.322 vdpa: 00:04:17.322 00:04:17.322 00:04:17.322 Message: 00:04:17.322 ================= 00:04:17.322 Content Skipped 00:04:17.322 ================= 00:04:17.322 00:04:17.322 apps: 00:04:17.322 dumpcap: explicitly disabled via build config 00:04:17.322 graph: explicitly disabled via build config 00:04:17.322 pdump: explicitly disabled via build config 00:04:17.322 proc-info: explicitly disabled via build config 00:04:17.322 test-acl: explicitly disabled via build config 00:04:17.322 test-bbdev: explicitly disabled via build config 00:04:17.322 test-cmdline: explicitly disabled via build config 00:04:17.323 test-compress-perf: explicitly disabled via build config 00:04:17.323 test-crypto-perf: explicitly disabled 
via build config 00:04:17.323 test-dma-perf: explicitly disabled via build config 00:04:17.323 test-eventdev: explicitly disabled via build config 00:04:17.323 test-fib: explicitly disabled via build config 00:04:17.323 test-flow-perf: explicitly disabled via build config 00:04:17.323 test-gpudev: explicitly disabled via build config 00:04:17.323 test-mldev: explicitly disabled via build config 00:04:17.323 test-pipeline: explicitly disabled via build config 00:04:17.323 test-pmd: explicitly disabled via build config 00:04:17.323 test-regex: explicitly disabled via build config 00:04:17.323 test-sad: explicitly disabled via build config 00:04:17.323 test-security-perf: explicitly disabled via build config 00:04:17.323 00:04:17.323 libs: 00:04:17.323 argparse: explicitly disabled via build config 00:04:17.323 metrics: explicitly disabled via build config 00:04:17.323 acl: explicitly disabled via build config 00:04:17.323 bbdev: explicitly disabled via build config 00:04:17.323 bitratestats: explicitly disabled via build config 00:04:17.323 bpf: explicitly disabled via build config 00:04:17.323 cfgfile: explicitly disabled via build config 00:04:17.323 distributor: explicitly disabled via build config 00:04:17.323 efd: explicitly disabled via build config 00:04:17.323 eventdev: explicitly disabled via build config 00:04:17.323 dispatcher: explicitly disabled via build config 00:04:17.323 gpudev: explicitly disabled via build config 00:04:17.323 gro: explicitly disabled via build config 00:04:17.323 gso: explicitly disabled via build config 00:04:17.323 ip_frag: explicitly disabled via build config 00:04:17.323 jobstats: explicitly disabled via build config 00:04:17.323 latencystats: explicitly disabled via build config 00:04:17.323 lpm: explicitly disabled via build config 00:04:17.323 member: explicitly disabled via build config 00:04:17.323 pcapng: explicitly disabled via build config 00:04:17.323 rawdev: explicitly disabled via build config 00:04:17.323 regexdev: 
explicitly disabled via build config 00:04:17.323 mldev: explicitly disabled via build config 00:04:17.323 rib: explicitly disabled via build config 00:04:17.323 sched: explicitly disabled via build config 00:04:17.323 stack: explicitly disabled via build config 00:04:17.323 ipsec: explicitly disabled via build config 00:04:17.323 pdcp: explicitly disabled via build config 00:04:17.323 fib: explicitly disabled via build config 00:04:17.323 port: explicitly disabled via build config 00:04:17.323 pdump: explicitly disabled via build config 00:04:17.323 table: explicitly disabled via build config 00:04:17.323 pipeline: explicitly disabled via build config 00:04:17.323 graph: explicitly disabled via build config 00:04:17.323 node: explicitly disabled via build config 00:04:17.323 00:04:17.323 drivers: 00:04:17.323 common/cpt: not in enabled drivers build config 00:04:17.323 common/dpaax: not in enabled drivers build config 00:04:17.323 common/iavf: not in enabled drivers build config 00:04:17.323 common/idpf: not in enabled drivers build config 00:04:17.323 common/ionic: not in enabled drivers build config 00:04:17.323 common/mvep: not in enabled drivers build config 00:04:17.323 common/octeontx: not in enabled drivers build config 00:04:17.323 bus/auxiliary: not in enabled drivers build config 00:04:17.323 bus/cdx: not in enabled drivers build config 00:04:17.323 bus/dpaa: not in enabled drivers build config 00:04:17.323 bus/fslmc: not in enabled drivers build config 00:04:17.323 bus/ifpga: not in enabled drivers build config 00:04:17.323 bus/platform: not in enabled drivers build config 00:04:17.323 bus/uacce: not in enabled drivers build config 00:04:17.323 bus/vmbus: not in enabled drivers build config 00:04:17.323 common/cnxk: not in enabled drivers build config 00:04:17.323 common/mlx5: not in enabled drivers build config 00:04:17.323 common/nfp: not in enabled drivers build config 00:04:17.323 common/nitrox: not in enabled drivers build config 00:04:17.323 
common/qat: not in enabled drivers build config 00:04:17.323 common/sfc_efx: not in enabled drivers build config 00:04:17.323 mempool/bucket: not in enabled drivers build config 00:04:17.323 mempool/cnxk: not in enabled drivers build config 00:04:17.323 mempool/dpaa: not in enabled drivers build config 00:04:17.323 mempool/dpaa2: not in enabled drivers build config 00:04:17.323 mempool/octeontx: not in enabled drivers build config 00:04:17.323 mempool/stack: not in enabled drivers build config 00:04:17.323 dma/cnxk: not in enabled drivers build config 00:04:17.323 dma/dpaa: not in enabled drivers build config 00:04:17.323 dma/dpaa2: not in enabled drivers build config 00:04:17.323 dma/hisilicon: not in enabled drivers build config 00:04:17.323 dma/idxd: not in enabled drivers build config 00:04:17.323 dma/ioat: not in enabled drivers build config 00:04:17.323 dma/skeleton: not in enabled drivers build config 00:04:17.323 net/af_packet: not in enabled drivers build config 00:04:17.323 net/af_xdp: not in enabled drivers build config 00:04:17.323 net/ark: not in enabled drivers build config 00:04:17.323 net/atlantic: not in enabled drivers build config 00:04:17.323 net/avp: not in enabled drivers build config 00:04:17.323 net/axgbe: not in enabled drivers build config 00:04:17.323 net/bnx2x: not in enabled drivers build config 00:04:17.323 net/bnxt: not in enabled drivers build config 00:04:17.323 net/bonding: not in enabled drivers build config 00:04:17.323 net/cnxk: not in enabled drivers build config 00:04:17.323 net/cpfl: not in enabled drivers build config 00:04:17.323 net/cxgbe: not in enabled drivers build config 00:04:17.323 net/dpaa: not in enabled drivers build config 00:04:17.323 net/dpaa2: not in enabled drivers build config 00:04:17.323 net/e1000: not in enabled drivers build config 00:04:17.323 net/ena: not in enabled drivers build config 00:04:17.323 net/enetc: not in enabled drivers build config 00:04:17.323 net/enetfec: not in enabled drivers build 
config 00:04:17.323 net/enic: not in enabled drivers build config 00:04:17.323 net/failsafe: not in enabled drivers build config 00:04:17.323 net/fm10k: not in enabled drivers build config 00:04:17.323 net/gve: not in enabled drivers build config 00:04:17.323 net/hinic: not in enabled drivers build config 00:04:17.323 net/hns3: not in enabled drivers build config 00:04:17.323 net/i40e: not in enabled drivers build config 00:04:17.323 net/iavf: not in enabled drivers build config 00:04:17.323 net/ice: not in enabled drivers build config 00:04:17.323 net/idpf: not in enabled drivers build config 00:04:17.323 net/igc: not in enabled drivers build config 00:04:17.323 net/ionic: not in enabled drivers build config 00:04:17.323 net/ipn3ke: not in enabled drivers build config 00:04:17.323 net/ixgbe: not in enabled drivers build config 00:04:17.323 net/mana: not in enabled drivers build config 00:04:17.323 net/memif: not in enabled drivers build config 00:04:17.323 net/mlx4: not in enabled drivers build config 00:04:17.323 net/mlx5: not in enabled drivers build config 00:04:17.323 net/mvneta: not in enabled drivers build config 00:04:17.323 net/mvpp2: not in enabled drivers build config 00:04:17.323 net/netvsc: not in enabled drivers build config 00:04:17.323 net/nfb: not in enabled drivers build config 00:04:17.323 net/nfp: not in enabled drivers build config 00:04:17.323 net/ngbe: not in enabled drivers build config 00:04:17.323 net/null: not in enabled drivers build config 00:04:17.323 net/octeontx: not in enabled drivers build config 00:04:17.324 net/octeon_ep: not in enabled drivers build config 00:04:17.324 net/pcap: not in enabled drivers build config 00:04:17.324 net/pfe: not in enabled drivers build config 00:04:17.324 net/qede: not in enabled drivers build config 00:04:17.324 net/ring: not in enabled drivers build config 00:04:17.324 net/sfc: not in enabled drivers build config 00:04:17.324 net/softnic: not in enabled drivers build config 00:04:17.324 net/tap: 
not in enabled drivers build config 00:04:17.324 net/thunderx: not in enabled drivers build config 00:04:17.324 net/txgbe: not in enabled drivers build config 00:04:17.324 net/vdev_netvsc: not in enabled drivers build config 00:04:17.324 net/vhost: not in enabled drivers build config 00:04:17.324 net/virtio: not in enabled drivers build config 00:04:17.324 net/vmxnet3: not in enabled drivers build config 00:04:17.324 raw/*: missing internal dependency, "rawdev" 00:04:17.324 crypto/armv8: not in enabled drivers build config 00:04:17.324 crypto/bcmfs: not in enabled drivers build config 00:04:17.324 crypto/caam_jr: not in enabled drivers build config 00:04:17.324 crypto/ccp: not in enabled drivers build config 00:04:17.324 crypto/cnxk: not in enabled drivers build config 00:04:17.324 crypto/dpaa_sec: not in enabled drivers build config 00:04:17.324 crypto/dpaa2_sec: not in enabled drivers build config 00:04:17.324 crypto/ipsec_mb: not in enabled drivers build config 00:04:17.324 crypto/mlx5: not in enabled drivers build config 00:04:17.324 crypto/mvsam: not in enabled drivers build config 00:04:17.324 crypto/nitrox: not in enabled drivers build config 00:04:17.324 crypto/null: not in enabled drivers build config 00:04:17.324 crypto/octeontx: not in enabled drivers build config 00:04:17.324 crypto/openssl: not in enabled drivers build config 00:04:17.324 crypto/scheduler: not in enabled drivers build config 00:04:17.324 crypto/uadk: not in enabled drivers build config 00:04:17.324 crypto/virtio: not in enabled drivers build config 00:04:17.324 compress/isal: not in enabled drivers build config 00:04:17.324 compress/mlx5: not in enabled drivers build config 00:04:17.324 compress/nitrox: not in enabled drivers build config 00:04:17.324 compress/octeontx: not in enabled drivers build config 00:04:17.324 compress/zlib: not in enabled drivers build config 00:04:17.324 regex/*: missing internal dependency, "regexdev" 00:04:17.324 ml/*: missing internal dependency, "mldev" 
00:04:17.324 vdpa/ifc: not in enabled drivers build config 00:04:17.324 vdpa/mlx5: not in enabled drivers build config 00:04:17.324 vdpa/nfp: not in enabled drivers build config 00:04:17.324 vdpa/sfc: not in enabled drivers build config 00:04:17.324 event/*: missing internal dependency, "eventdev" 00:04:17.324 baseband/*: missing internal dependency, "bbdev" 00:04:17.324 gpu/*: missing internal dependency, "gpudev" 00:04:17.324 00:04:17.324 00:04:17.324 Build targets in project: 85 00:04:17.324 00:04:17.324 DPDK 24.03.0 00:04:17.324 00:04:17.324 User defined options 00:04:17.324 buildtype : debug 00:04:17.324 default_library : shared 00:04:17.324 libdir : lib 00:04:17.324 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:17.324 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:17.324 c_link_args : 00:04:17.324 cpu_instruction_set: native 00:04:17.324 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:04:17.324 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:04:17.324 enable_docs : false 00:04:17.324 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:17.324 enable_kmods : false 00:04:17.324 max_lcores : 128 00:04:17.324 tests : false 00:04:17.324 00:04:17.324 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:17.324 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:17.324 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:17.324 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:17.324 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:17.324 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:17.324 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:17.324 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:17.324 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:17.324 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:17.324 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:17.324 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:17.324 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:17.324 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:17.324 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:17.324 [14/268] Linking static target lib/librte_kvargs.a 00:04:17.324 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:17.324 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:17.324 [17/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:17.324 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:17.324 [19/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:17.324 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:17.324 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:17.324 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:17.324 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:17.324 [24/268] 
Linking static target lib/librte_pci.a 00:04:17.324 [25/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:17.324 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:17.324 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:17.324 [28/268] Linking static target lib/librte_log.a 00:04:17.324 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:17.324 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:17.585 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:17.585 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:17.585 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:17.585 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:17.585 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:17.585 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:17.585 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:17.585 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:17.585 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:17.585 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:17.585 [41/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:17.585 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:17.585 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:17.585 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:17.585 [45/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:17.585 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:17.585 [47/268] 
Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:17.585 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:17.585 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:17.585 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:17.585 [51/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:17.851 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:17.851 [53/268] Linking static target lib/librte_meter.a 00:04:17.851 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:17.851 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:17.851 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:17.851 [57/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:17.851 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:17.851 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:17.851 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:17.851 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:17.851 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:17.851 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:17.851 [64/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:17.851 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:17.851 [66/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:17.851 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:17.851 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:17.851 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:17.851 
[70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:17.851 [71/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:17.851 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:17.851 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:17.851 [74/268] Linking static target lib/librte_telemetry.a 00:04:17.851 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:17.851 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:17.851 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:17.851 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:17.851 [79/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:17.851 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:17.851 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:17.851 [82/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:17.851 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:17.851 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:17.851 [85/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.851 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:17.851 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:17.851 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:17.851 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:17.851 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:17.851 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:17.851 [92/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:17.851 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:17.851 [94/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:17.851 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:17.851 [96/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:17.851 [97/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:17.851 [98/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:17.851 [99/268] Linking static target lib/librte_cmdline.a 00:04:17.851 [100/268] Linking static target lib/librte_ring.a 00:04:17.851 [101/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:17.851 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:17.851 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:17.851 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:17.851 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:17.851 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:17.851 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:17.851 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:17.851 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:17.851 [110/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:17.851 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:17.851 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:17.851 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:17.851 [114/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:17.851 [115/268] 
Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:17.851 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:17.851 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:17.851 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:17.851 [119/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:17.851 [120/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.851 [121/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:17.851 [122/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:17.851 [123/268] Linking static target lib/librte_timer.a 00:04:17.851 [124/268] Linking static target lib/librte_rcu.a 00:04:17.851 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:17.851 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:17.851 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:17.851 [128/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:17.851 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:17.851 [130/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:17.851 [131/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:17.851 [132/268] Linking static target lib/librte_mempool.a 00:04:17.851 [133/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:17.851 [134/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:17.851 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:17.851 [136/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:17.851 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:17.851 [138/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:17.851 [139/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:17.851 [140/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:17.851 [141/268] Linking static target lib/librte_compressdev.a 00:04:17.851 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:17.851 [143/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:18.110 [144/268] Linking static target lib/librte_net.a 00:04:18.110 [145/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:18.110 [146/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:18.110 [147/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.110 [148/268] Linking static target lib/librte_dmadev.a 00:04:18.110 [149/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:18.110 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:18.110 [151/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:18.110 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:18.110 [153/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:18.110 [154/268] Linking static target lib/librte_eal.a 00:04:18.110 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:18.110 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:18.110 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:18.110 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:18.110 [159/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.110 [160/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:18.110 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:18.110 [162/268] Linking target lib/librte_log.so.24.1 00:04:18.110 [163/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:18.110 [164/268] Linking static target lib/librte_mbuf.a 00:04:18.110 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:18.110 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:18.110 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:18.110 [168/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:18.110 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:18.110 [170/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:18.110 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:18.110 [172/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.110 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:18.110 [174/268] Linking static target lib/librte_power.a 00:04:18.110 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:18.110 [176/268] Linking static target lib/librte_hash.a 00:04:18.369 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:18.369 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:18.369 [179/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:18.369 [180/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:18.369 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:18.369 [182/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:18.369 [183/268] Linking static target lib/librte_security.a 00:04:18.369 [184/268] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.369 [185/268] Linking static target lib/librte_reorder.a 00:04:18.369 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:18.369 [187/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.369 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:18.369 [189/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.369 [190/268] Linking target lib/librte_kvargs.so.24.1 00:04:18.369 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:18.369 [192/268] Linking target lib/librte_telemetry.so.24.1 00:04:18.369 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:18.369 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:18.369 [195/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:18.369 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:18.369 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:18.369 [198/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:18.369 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:18.369 [200/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:18.369 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:18.369 [202/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:18.369 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:18.369 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:18.369 [205/268] Linking static target drivers/librte_bus_vdev.a 00:04:18.627 [206/268] Linking static target 
lib/librte_cryptodev.a 00:04:18.627 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:18.627 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:18.627 [209/268] Linking static target drivers/librte_bus_pci.a 00:04:18.627 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:18.627 [211/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.627 [212/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.627 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:18.627 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:18.627 [215/268] Linking static target drivers/librte_mempool_ring.a 00:04:18.627 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.627 [217/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.886 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:18.886 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.886 [220/268] Linking static target lib/librte_ethdev.a 00:04:18.886 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.886 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.886 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.145 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:19.145 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.145 
[226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.405 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.974 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:20.234 [229/268] Linking static target lib/librte_vhost.a 00:04:20.495 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.885 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.167 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.108 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.108 [234/268] Linking target lib/librte_eal.so.24.1 00:04:28.108 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:28.108 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:28.108 [237/268] Linking target lib/librte_ring.so.24.1 00:04:28.108 [238/268] Linking target lib/librte_meter.so.24.1 00:04:28.108 [239/268] Linking target lib/librte_pci.so.24.1 00:04:28.108 [240/268] Linking target lib/librte_dmadev.so.24.1 00:04:28.108 [241/268] Linking target lib/librte_timer.so.24.1 00:04:28.366 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:28.367 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:28.367 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:28.367 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:28.367 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:28.367 [247/268] Linking target lib/librte_rcu.so.24.1 00:04:28.367 [248/268] Linking target lib/librte_mempool.so.24.1 00:04:28.367 
[249/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:28.367 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:28.367 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:28.626 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:28.626 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:28.626 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:28.626 [255/268] Linking target lib/librte_compressdev.so.24.1 00:04:28.626 [256/268] Linking target lib/librte_net.so.24.1 00:04:28.626 [257/268] Linking target lib/librte_reorder.so.24.1 00:04:28.626 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:28.884 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:28.884 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:28.884 [261/268] Linking target lib/librte_cmdline.so.24.1 00:04:28.884 [262/268] Linking target lib/librte_hash.so.24.1 00:04:28.884 [263/268] Linking target lib/librte_ethdev.so.24.1 00:04:28.884 [264/268] Linking target lib/librte_security.so.24.1 00:04:29.143 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:29.143 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:29.143 [267/268] Linking target lib/librte_power.so.24.1 00:04:29.143 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:29.143 INFO: autodetecting backend as ninja 00:04:29.143 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:04:39.132 CC lib/ut_mock/mock.o 00:04:39.132 CC lib/ut/ut.o 00:04:39.132 CC lib/log/log.o 00:04:39.132 CC lib/log/log_flags.o 00:04:39.132 CC lib/log/log_deprecated.o 00:04:39.132 LIB libspdk_ut.a 00:04:39.132 LIB libspdk_ut_mock.a 
00:04:39.132 LIB libspdk_log.a 00:04:39.132 SO libspdk_ut.so.2.0 00:04:39.132 SO libspdk_ut_mock.so.6.0 00:04:39.132 SO libspdk_log.so.7.1 00:04:39.132 SYMLINK libspdk_ut.so 00:04:39.132 SYMLINK libspdk_ut_mock.so 00:04:39.132 SYMLINK libspdk_log.so 00:04:39.132 CXX lib/trace_parser/trace.o 00:04:39.132 CC lib/dma/dma.o 00:04:39.132 CC lib/util/base64.o 00:04:39.132 CC lib/util/bit_array.o 00:04:39.132 CC lib/ioat/ioat.o 00:04:39.132 CC lib/util/cpuset.o 00:04:39.132 CC lib/util/crc16.o 00:04:39.132 CC lib/util/crc32.o 00:04:39.132 CC lib/util/crc32c.o 00:04:39.132 CC lib/util/crc32_ieee.o 00:04:39.132 CC lib/util/crc64.o 00:04:39.132 CC lib/util/dif.o 00:04:39.132 CC lib/util/fd.o 00:04:39.132 CC lib/util/fd_group.o 00:04:39.132 CC lib/util/file.o 00:04:39.132 CC lib/util/hexlify.o 00:04:39.132 CC lib/util/iov.o 00:04:39.132 CC lib/util/math.o 00:04:39.132 CC lib/util/net.o 00:04:39.132 CC lib/util/strerror_tls.o 00:04:39.132 CC lib/util/pipe.o 00:04:39.132 CC lib/util/string.o 00:04:39.132 CC lib/util/uuid.o 00:04:39.132 CC lib/util/xor.o 00:04:39.132 CC lib/util/zipf.o 00:04:39.132 CC lib/util/md5.o 00:04:39.132 CC lib/vfio_user/host/vfio_user.o 00:04:39.133 CC lib/vfio_user/host/vfio_user_pci.o 00:04:39.133 LIB libspdk_dma.a 00:04:39.133 SO libspdk_dma.so.5.0 00:04:39.133 LIB libspdk_ioat.a 00:04:39.133 SO libspdk_ioat.so.7.0 00:04:39.133 SYMLINK libspdk_dma.so 00:04:39.133 SYMLINK libspdk_ioat.so 00:04:39.133 LIB libspdk_vfio_user.a 00:04:39.133 SO libspdk_vfio_user.so.5.0 00:04:39.133 LIB libspdk_util.a 00:04:39.133 SYMLINK libspdk_vfio_user.so 00:04:39.133 SO libspdk_util.so.10.1 00:04:39.133 SYMLINK libspdk_util.so 00:04:39.133 CC lib/conf/conf.o 00:04:39.133 CC lib/vmd/vmd.o 00:04:39.133 CC lib/vmd/led.o 00:04:39.133 CC lib/env_dpdk/env.o 00:04:39.133 CC lib/env_dpdk/memory.o 00:04:39.133 CC lib/env_dpdk/pci.o 00:04:39.133 CC lib/env_dpdk/init.o 00:04:39.133 CC lib/rdma_utils/rdma_utils.o 00:04:39.133 CC lib/env_dpdk/threads.o 00:04:39.133 CC 
lib/json/json_parse.o 00:04:39.133 CC lib/env_dpdk/pci_ioat.o 00:04:39.133 CC lib/env_dpdk/pci_virtio.o 00:04:39.133 CC lib/json/json_util.o 00:04:39.133 CC lib/env_dpdk/pci_vmd.o 00:04:39.133 CC lib/json/json_write.o 00:04:39.133 CC lib/env_dpdk/pci_idxd.o 00:04:39.133 CC lib/env_dpdk/pci_event.o 00:04:39.133 CC lib/idxd/idxd.o 00:04:39.133 CC lib/env_dpdk/sigbus_handler.o 00:04:39.133 CC lib/idxd/idxd_user.o 00:04:39.133 CC lib/env_dpdk/pci_dpdk.o 00:04:39.133 CC lib/idxd/idxd_kernel.o 00:04:39.133 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:39.133 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:39.392 LIB libspdk_conf.a 00:04:39.392 SO libspdk_conf.so.6.0 00:04:39.392 LIB libspdk_json.a 00:04:39.392 LIB libspdk_rdma_utils.a 00:04:39.651 SO libspdk_rdma_utils.so.1.0 00:04:39.651 SYMLINK libspdk_conf.so 00:04:39.651 SO libspdk_json.so.6.0 00:04:39.651 SYMLINK libspdk_rdma_utils.so 00:04:39.651 SYMLINK libspdk_json.so 00:04:39.651 LIB libspdk_vmd.a 00:04:39.651 LIB libspdk_idxd.a 00:04:39.651 SO libspdk_vmd.so.6.0 00:04:39.651 SO libspdk_idxd.so.12.1 00:04:39.913 LIB libspdk_trace_parser.a 00:04:39.913 SYMLINK libspdk_vmd.so 00:04:39.913 SO libspdk_trace_parser.so.6.0 00:04:39.913 SYMLINK libspdk_idxd.so 00:04:39.913 CC lib/rdma_provider/common.o 00:04:39.913 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:39.913 CC lib/jsonrpc/jsonrpc_server.o 00:04:39.913 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:39.913 CC lib/jsonrpc/jsonrpc_client.o 00:04:39.913 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:39.913 SYMLINK libspdk_trace_parser.so 00:04:40.172 LIB libspdk_rdma_provider.a 00:04:40.172 SO libspdk_rdma_provider.so.7.0 00:04:40.172 LIB libspdk_jsonrpc.a 00:04:40.172 SO libspdk_jsonrpc.so.6.0 00:04:40.172 SYMLINK libspdk_rdma_provider.so 00:04:40.172 SYMLINK libspdk_jsonrpc.so 00:04:40.172 LIB libspdk_env_dpdk.a 00:04:40.172 SO libspdk_env_dpdk.so.15.1 00:04:40.432 SYMLINK libspdk_env_dpdk.so 00:04:40.432 CC lib/rpc/rpc.o 00:04:40.690 LIB libspdk_rpc.a 00:04:40.690 SO libspdk_rpc.so.6.0 
00:04:40.690 SYMLINK libspdk_rpc.so 00:04:41.259 CC lib/trace/trace.o 00:04:41.259 CC lib/keyring/keyring.o 00:04:41.259 CC lib/trace/trace_flags.o 00:04:41.259 CC lib/keyring/keyring_rpc.o 00:04:41.259 CC lib/trace/trace_rpc.o 00:04:41.259 CC lib/notify/notify.o 00:04:41.259 CC lib/notify/notify_rpc.o 00:04:41.259 LIB libspdk_notify.a 00:04:41.259 SO libspdk_notify.so.6.0 00:04:41.259 LIB libspdk_keyring.a 00:04:41.259 LIB libspdk_trace.a 00:04:41.259 SO libspdk_keyring.so.2.0 00:04:41.259 SYMLINK libspdk_notify.so 00:04:41.259 SO libspdk_trace.so.11.0 00:04:41.259 SYMLINK libspdk_keyring.so 00:04:41.518 SYMLINK libspdk_trace.so 00:04:41.779 CC lib/thread/thread.o 00:04:41.779 CC lib/sock/sock.o 00:04:41.779 CC lib/thread/iobuf.o 00:04:41.779 CC lib/sock/sock_rpc.o 00:04:42.038 LIB libspdk_sock.a 00:04:42.038 SO libspdk_sock.so.10.0 00:04:42.038 SYMLINK libspdk_sock.so 00:04:42.297 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:42.297 CC lib/nvme/nvme_ctrlr.o 00:04:42.297 CC lib/nvme/nvme_fabric.o 00:04:42.297 CC lib/nvme/nvme_ns_cmd.o 00:04:42.297 CC lib/nvme/nvme_ns.o 00:04:42.297 CC lib/nvme/nvme_pcie_common.o 00:04:42.297 CC lib/nvme/nvme_pcie.o 00:04:42.297 CC lib/nvme/nvme_qpair.o 00:04:42.297 CC lib/nvme/nvme.o 00:04:42.297 CC lib/nvme/nvme_quirks.o 00:04:42.297 CC lib/nvme/nvme_transport.o 00:04:42.297 CC lib/nvme/nvme_discovery.o 00:04:42.297 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:42.297 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:42.297 CC lib/nvme/nvme_tcp.o 00:04:42.297 CC lib/nvme/nvme_opal.o 00:04:42.297 CC lib/nvme/nvme_io_msg.o 00:04:42.297 CC lib/nvme/nvme_poll_group.o 00:04:42.297 CC lib/nvme/nvme_zns.o 00:04:42.297 CC lib/nvme/nvme_stubs.o 00:04:42.297 CC lib/nvme/nvme_auth.o 00:04:42.297 CC lib/nvme/nvme_cuse.o 00:04:42.297 CC lib/nvme/nvme_vfio_user.o 00:04:42.297 CC lib/nvme/nvme_rdma.o 00:04:42.864 LIB libspdk_thread.a 00:04:42.864 SO libspdk_thread.so.11.0 00:04:42.864 SYMLINK libspdk_thread.so 00:04:43.123 CC lib/virtio/virtio.o 00:04:43.123 CC 
lib/virtio/virtio_vhost_user.o 00:04:43.123 CC lib/virtio/virtio_vfio_user.o 00:04:43.123 CC lib/virtio/virtio_pci.o 00:04:43.123 CC lib/blob/blobstore.o 00:04:43.123 CC lib/vfu_tgt/tgt_endpoint.o 00:04:43.123 CC lib/blob/request.o 00:04:43.123 CC lib/vfu_tgt/tgt_rpc.o 00:04:43.123 CC lib/blob/zeroes.o 00:04:43.123 CC lib/blob/blob_bs_dev.o 00:04:43.123 CC lib/accel/accel.o 00:04:43.123 CC lib/fsdev/fsdev.o 00:04:43.123 CC lib/accel/accel_rpc.o 00:04:43.123 CC lib/accel/accel_sw.o 00:04:43.123 CC lib/fsdev/fsdev_io.o 00:04:43.123 CC lib/init/subsystem_rpc.o 00:04:43.123 CC lib/init/json_config.o 00:04:43.123 CC lib/fsdev/fsdev_rpc.o 00:04:43.123 CC lib/init/subsystem.o 00:04:43.123 CC lib/init/rpc.o 00:04:43.383 LIB libspdk_init.a 00:04:43.383 SO libspdk_init.so.6.0 00:04:43.383 LIB libspdk_virtio.a 00:04:43.383 LIB libspdk_vfu_tgt.a 00:04:43.383 SO libspdk_virtio.so.7.0 00:04:43.383 SO libspdk_vfu_tgt.so.3.0 00:04:43.383 SYMLINK libspdk_init.so 00:04:43.383 SYMLINK libspdk_virtio.so 00:04:43.383 SYMLINK libspdk_vfu_tgt.so 00:04:43.644 LIB libspdk_fsdev.a 00:04:43.644 SO libspdk_fsdev.so.2.0 00:04:43.644 CC lib/event/app.o 00:04:43.644 CC lib/event/log_rpc.o 00:04:43.644 CC lib/event/reactor.o 00:04:43.644 CC lib/event/app_rpc.o 00:04:43.644 CC lib/event/scheduler_static.o 00:04:43.644 SYMLINK libspdk_fsdev.so 00:04:43.905 LIB libspdk_accel.a 00:04:43.905 SO libspdk_accel.so.16.0 00:04:43.905 SYMLINK libspdk_accel.so 00:04:43.905 LIB libspdk_nvme.a 00:04:43.905 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:43.905 LIB libspdk_event.a 00:04:44.165 SO libspdk_event.so.14.0 00:04:44.165 SO libspdk_nvme.so.15.0 00:04:44.165 SYMLINK libspdk_event.so 00:04:44.165 SYMLINK libspdk_nvme.so 00:04:44.165 CC lib/bdev/bdev.o 00:04:44.165 CC lib/bdev/bdev_rpc.o 00:04:44.165 CC lib/bdev/bdev_zone.o 00:04:44.165 CC lib/bdev/scsi_nvme.o 00:04:44.165 CC lib/bdev/part.o 00:04:44.425 LIB libspdk_fuse_dispatcher.a 00:04:44.425 SO libspdk_fuse_dispatcher.so.1.0 00:04:44.425 SYMLINK 
libspdk_fuse_dispatcher.so 00:04:44.994 LIB libspdk_blob.a 00:04:45.254 SO libspdk_blob.so.12.0 00:04:45.254 SYMLINK libspdk_blob.so 00:04:45.513 CC lib/blobfs/blobfs.o 00:04:45.513 CC lib/blobfs/tree.o 00:04:45.513 CC lib/lvol/lvol.o 00:04:46.083 LIB libspdk_bdev.a 00:04:46.083 LIB libspdk_blobfs.a 00:04:46.083 SO libspdk_bdev.so.17.0 00:04:46.083 SO libspdk_blobfs.so.11.0 00:04:46.083 LIB libspdk_lvol.a 00:04:46.083 SO libspdk_lvol.so.11.0 00:04:46.083 SYMLINK libspdk_bdev.so 00:04:46.083 SYMLINK libspdk_blobfs.so 00:04:46.343 SYMLINK libspdk_lvol.so 00:04:46.603 CC lib/nbd/nbd.o 00:04:46.603 CC lib/nvmf/ctrlr.o 00:04:46.603 CC lib/nbd/nbd_rpc.o 00:04:46.603 CC lib/nvmf/ctrlr_discovery.o 00:04:46.603 CC lib/ublk/ublk.o 00:04:46.603 CC lib/nvmf/ctrlr_bdev.o 00:04:46.603 CC lib/ublk/ublk_rpc.o 00:04:46.603 CC lib/nvmf/subsystem.o 00:04:46.603 CC lib/scsi/dev.o 00:04:46.603 CC lib/nvmf/nvmf.o 00:04:46.603 CC lib/scsi/lun.o 00:04:46.603 CC lib/nvmf/nvmf_rpc.o 00:04:46.603 CC lib/scsi/port.o 00:04:46.603 CC lib/nvmf/transport.o 00:04:46.603 CC lib/scsi/scsi.o 00:04:46.603 CC lib/ftl/ftl_core.o 00:04:46.603 CC lib/scsi/scsi_bdev.o 00:04:46.603 CC lib/nvmf/tcp.o 00:04:46.603 CC lib/ftl/ftl_init.o 00:04:46.603 CC lib/nvmf/stubs.o 00:04:46.603 CC lib/scsi/scsi_pr.o 00:04:46.603 CC lib/ftl/ftl_layout.o 00:04:46.603 CC lib/ftl/ftl_debug.o 00:04:46.603 CC lib/nvmf/mdns_server.o 00:04:46.603 CC lib/scsi/scsi_rpc.o 00:04:46.603 CC lib/scsi/task.o 00:04:46.603 CC lib/ftl/ftl_io.o 00:04:46.603 CC lib/nvmf/vfio_user.o 00:04:46.603 CC lib/ftl/ftl_sb.o 00:04:46.603 CC lib/nvmf/rdma.o 00:04:46.603 CC lib/nvmf/auth.o 00:04:46.603 CC lib/ftl/ftl_l2p.o 00:04:46.603 CC lib/ftl/ftl_l2p_flat.o 00:04:46.603 CC lib/ftl/ftl_band.o 00:04:46.603 CC lib/ftl/ftl_nv_cache.o 00:04:46.603 CC lib/ftl/ftl_band_ops.o 00:04:46.603 CC lib/ftl/ftl_writer.o 00:04:46.603 CC lib/ftl/ftl_rq.o 00:04:46.603 CC lib/ftl/ftl_reloc.o 00:04:46.603 CC lib/ftl/ftl_l2p_cache.o 00:04:46.603 CC lib/ftl/ftl_p2l.o 
00:04:46.603 CC lib/ftl/ftl_p2l_log.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:46.603 CC lib/ftl/utils/ftl_conf.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:46.603 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:46.603 CC lib/ftl/utils/ftl_mempool.o 00:04:46.603 CC lib/ftl/utils/ftl_md.o 00:04:46.603 CC lib/ftl/utils/ftl_property.o 00:04:46.603 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:46.603 CC lib/ftl/utils/ftl_bitmap.o 00:04:46.603 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:46.603 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:46.603 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:46.603 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:46.603 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:46.603 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:46.603 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:46.603 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:46.603 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:46.603 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:46.603 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:46.603 CC lib/ftl/base/ftl_base_dev.o 00:04:46.603 CC lib/ftl/ftl_trace.o 00:04:46.603 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:46.603 CC lib/ftl/base/ftl_base_bdev.o 00:04:47.171 LIB libspdk_scsi.a 00:04:47.171 SO libspdk_scsi.so.9.0 00:04:47.171 SYMLINK libspdk_scsi.so 00:04:47.171 LIB libspdk_nbd.a 00:04:47.171 SO libspdk_nbd.so.7.0 00:04:47.171 LIB libspdk_ublk.a 00:04:47.171 SYMLINK libspdk_nbd.so 00:04:47.171 SO libspdk_ublk.so.3.0 00:04:47.430 SYMLINK libspdk_ublk.so 00:04:47.430 LIB libspdk_ftl.a 00:04:47.430 CC lib/iscsi/conn.o 00:04:47.430 
CC lib/iscsi/init_grp.o 00:04:47.430 CC lib/iscsi/iscsi.o 00:04:47.430 CC lib/iscsi/param.o 00:04:47.430 CC lib/iscsi/portal_grp.o 00:04:47.430 CC lib/iscsi/tgt_node.o 00:04:47.430 CC lib/iscsi/task.o 00:04:47.430 CC lib/iscsi/iscsi_subsystem.o 00:04:47.431 CC lib/iscsi/iscsi_rpc.o 00:04:47.431 CC lib/vhost/vhost_scsi.o 00:04:47.431 CC lib/vhost/vhost.o 00:04:47.431 CC lib/vhost/vhost_rpc.o 00:04:47.431 CC lib/vhost/vhost_blk.o 00:04:47.431 CC lib/vhost/rte_vhost_user.o 00:04:47.431 SO libspdk_ftl.so.9.0 00:04:47.690 SYMLINK libspdk_ftl.so 00:04:48.260 LIB libspdk_nvmf.a 00:04:48.260 SO libspdk_nvmf.so.20.0 00:04:48.260 LIB libspdk_vhost.a 00:04:48.260 SO libspdk_vhost.so.8.0 00:04:48.260 SYMLINK libspdk_vhost.so 00:04:48.260 SYMLINK libspdk_nvmf.so 00:04:48.520 LIB libspdk_iscsi.a 00:04:48.520 SO libspdk_iscsi.so.8.0 00:04:48.520 SYMLINK libspdk_iscsi.so 00:04:49.087 CC module/env_dpdk/env_dpdk_rpc.o 00:04:49.087 CC module/vfu_device/vfu_virtio.o 00:04:49.087 CC module/vfu_device/vfu_virtio_scsi.o 00:04:49.087 CC module/vfu_device/vfu_virtio_blk.o 00:04:49.087 CC module/vfu_device/vfu_virtio_rpc.o 00:04:49.087 CC module/vfu_device/vfu_virtio_fs.o 00:04:49.345 LIB libspdk_env_dpdk_rpc.a 00:04:49.345 CC module/fsdev/aio/fsdev_aio.o 00:04:49.345 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:49.345 CC module/keyring/linux/keyring.o 00:04:49.345 CC module/fsdev/aio/linux_aio_mgr.o 00:04:49.345 CC module/accel/ioat/accel_ioat.o 00:04:49.345 CC module/keyring/linux/keyring_rpc.o 00:04:49.345 CC module/accel/iaa/accel_iaa.o 00:04:49.345 CC module/accel/iaa/accel_iaa_rpc.o 00:04:49.345 CC module/accel/ioat/accel_ioat_rpc.o 00:04:49.345 CC module/keyring/file/keyring.o 00:04:49.345 CC module/keyring/file/keyring_rpc.o 00:04:49.345 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:49.345 CC module/scheduler/gscheduler/gscheduler.o 00:04:49.345 CC module/blob/bdev/blob_bdev.o 00:04:49.345 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:49.345 CC 
module/accel/dsa/accel_dsa.o 00:04:49.345 CC module/accel/error/accel_error.o 00:04:49.345 CC module/accel/dsa/accel_dsa_rpc.o 00:04:49.345 CC module/accel/error/accel_error_rpc.o 00:04:49.345 CC module/sock/posix/posix.o 00:04:49.345 SO libspdk_env_dpdk_rpc.so.6.0 00:04:49.345 SYMLINK libspdk_env_dpdk_rpc.so 00:04:49.345 LIB libspdk_keyring_file.a 00:04:49.345 LIB libspdk_keyring_linux.a 00:04:49.345 LIB libspdk_scheduler_gscheduler.a 00:04:49.345 LIB libspdk_scheduler_dpdk_governor.a 00:04:49.345 SO libspdk_keyring_linux.so.1.0 00:04:49.345 SO libspdk_scheduler_gscheduler.so.4.0 00:04:49.345 SO libspdk_keyring_file.so.2.0 00:04:49.345 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:49.345 LIB libspdk_accel_error.a 00:04:49.345 LIB libspdk_scheduler_dynamic.a 00:04:49.345 LIB libspdk_accel_ioat.a 00:04:49.345 LIB libspdk_accel_iaa.a 00:04:49.345 SO libspdk_accel_error.so.2.0 00:04:49.345 SO libspdk_scheduler_dynamic.so.4.0 00:04:49.345 SYMLINK libspdk_keyring_linux.so 00:04:49.604 SO libspdk_accel_ioat.so.6.0 00:04:49.604 SYMLINK libspdk_scheduler_gscheduler.so 00:04:49.604 SYMLINK libspdk_keyring_file.so 00:04:49.604 SO libspdk_accel_iaa.so.3.0 00:04:49.604 LIB libspdk_blob_bdev.a 00:04:49.604 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:49.604 LIB libspdk_accel_dsa.a 00:04:49.604 SYMLINK libspdk_accel_error.so 00:04:49.604 SO libspdk_blob_bdev.so.12.0 00:04:49.604 SO libspdk_accel_dsa.so.5.0 00:04:49.604 SYMLINK libspdk_accel_ioat.so 00:04:49.604 SYMLINK libspdk_scheduler_dynamic.so 00:04:49.604 SYMLINK libspdk_accel_iaa.so 00:04:49.604 LIB libspdk_vfu_device.a 00:04:49.604 SYMLINK libspdk_blob_bdev.so 00:04:49.604 SYMLINK libspdk_accel_dsa.so 00:04:49.604 SO libspdk_vfu_device.so.3.0 00:04:49.604 SYMLINK libspdk_vfu_device.so 00:04:49.863 LIB libspdk_fsdev_aio.a 00:04:49.863 SO libspdk_fsdev_aio.so.1.0 00:04:49.863 LIB libspdk_sock_posix.a 00:04:49.863 SYMLINK libspdk_fsdev_aio.so 00:04:49.863 SO libspdk_sock_posix.so.6.0 00:04:49.863 SYMLINK 
libspdk_sock_posix.so 00:04:50.121 CC module/bdev/gpt/gpt.o 00:04:50.121 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:50.121 CC module/bdev/gpt/vbdev_gpt.o 00:04:50.121 CC module/bdev/lvol/vbdev_lvol.o 00:04:50.121 CC module/bdev/malloc/bdev_malloc.o 00:04:50.121 CC module/bdev/error/vbdev_error.o 00:04:50.121 CC module/bdev/error/vbdev_error_rpc.o 00:04:50.121 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:50.121 CC module/bdev/null/bdev_null.o 00:04:50.121 CC module/bdev/null/bdev_null_rpc.o 00:04:50.121 CC module/bdev/delay/vbdev_delay.o 00:04:50.121 CC module/bdev/raid/bdev_raid_rpc.o 00:04:50.121 CC module/bdev/raid/bdev_raid.o 00:04:50.121 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:50.121 CC module/bdev/ftl/bdev_ftl.o 00:04:50.121 CC module/bdev/raid/bdev_raid_sb.o 00:04:50.121 CC module/bdev/aio/bdev_aio.o 00:04:50.121 CC module/bdev/aio/bdev_aio_rpc.o 00:04:50.121 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:50.121 CC module/bdev/raid/raid0.o 00:04:50.121 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:50.121 CC module/bdev/raid/raid1.o 00:04:50.121 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:50.121 CC module/bdev/raid/concat.o 00:04:50.121 CC module/bdev/passthru/vbdev_passthru.o 00:04:50.121 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:50.121 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:50.121 CC module/bdev/nvme/bdev_nvme.o 00:04:50.121 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:50.121 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:50.121 CC module/bdev/split/vbdev_split.o 00:04:50.121 CC module/blobfs/bdev/blobfs_bdev.o 00:04:50.121 CC module/bdev/split/vbdev_split_rpc.o 00:04:50.121 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:50.121 CC module/bdev/nvme/nvme_rpc.o 00:04:50.121 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:50.122 CC module/bdev/nvme/bdev_mdns_client.o 00:04:50.122 CC module/bdev/nvme/vbdev_opal.o 00:04:50.122 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:50.122 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:50.122 CC 
module/bdev/iscsi/bdev_iscsi.o 00:04:50.122 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:50.380 LIB libspdk_blobfs_bdev.a 00:04:50.380 SO libspdk_blobfs_bdev.so.6.0 00:04:50.380 LIB libspdk_bdev_split.a 00:04:50.380 LIB libspdk_bdev_gpt.a 00:04:50.380 LIB libspdk_bdev_null.a 00:04:50.380 LIB libspdk_bdev_error.a 00:04:50.380 SO libspdk_bdev_split.so.6.0 00:04:50.380 SO libspdk_bdev_null.so.6.0 00:04:50.380 SO libspdk_bdev_gpt.so.6.0 00:04:50.380 SYMLINK libspdk_blobfs_bdev.so 00:04:50.380 SO libspdk_bdev_error.so.6.0 00:04:50.380 LIB libspdk_bdev_ftl.a 00:04:50.380 LIB libspdk_bdev_passthru.a 00:04:50.380 SYMLINK libspdk_bdev_split.so 00:04:50.380 SYMLINK libspdk_bdev_gpt.so 00:04:50.380 LIB libspdk_bdev_delay.a 00:04:50.380 LIB libspdk_bdev_zone_block.a 00:04:50.380 SO libspdk_bdev_ftl.so.6.0 00:04:50.380 SYMLINK libspdk_bdev_null.so 00:04:50.380 SYMLINK libspdk_bdev_error.so 00:04:50.380 LIB libspdk_bdev_malloc.a 00:04:50.380 LIB libspdk_bdev_aio.a 00:04:50.380 SO libspdk_bdev_passthru.so.6.0 00:04:50.380 LIB libspdk_bdev_iscsi.a 00:04:50.380 SO libspdk_bdev_delay.so.6.0 00:04:50.380 SO libspdk_bdev_zone_block.so.6.0 00:04:50.380 SO libspdk_bdev_aio.so.6.0 00:04:50.380 SO libspdk_bdev_malloc.so.6.0 00:04:50.380 SYMLINK libspdk_bdev_ftl.so 00:04:50.380 SO libspdk_bdev_iscsi.so.6.0 00:04:50.640 SYMLINK libspdk_bdev_passthru.so 00:04:50.640 LIB libspdk_bdev_lvol.a 00:04:50.640 SYMLINK libspdk_bdev_aio.so 00:04:50.640 SYMLINK libspdk_bdev_delay.so 00:04:50.640 SYMLINK libspdk_bdev_zone_block.so 00:04:50.640 SYMLINK libspdk_bdev_malloc.so 00:04:50.640 LIB libspdk_bdev_virtio.a 00:04:50.640 SO libspdk_bdev_lvol.so.6.0 00:04:50.640 SYMLINK libspdk_bdev_iscsi.so 00:04:50.640 SO libspdk_bdev_virtio.so.6.0 00:04:50.640 SYMLINK libspdk_bdev_lvol.so 00:04:50.640 SYMLINK libspdk_bdev_virtio.so 00:04:50.899 LIB libspdk_bdev_raid.a 00:04:50.899 SO libspdk_bdev_raid.so.6.0 00:04:50.899 SYMLINK libspdk_bdev_raid.so 00:04:51.837 LIB libspdk_bdev_nvme.a 00:04:51.837 SO 
libspdk_bdev_nvme.so.7.1 00:04:51.837 SYMLINK libspdk_bdev_nvme.so 00:04:52.776 CC module/event/subsystems/vmd/vmd.o 00:04:52.776 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:52.776 CC module/event/subsystems/iobuf/iobuf.o 00:04:52.776 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:52.776 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:52.776 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:52.776 CC module/event/subsystems/sock/sock.o 00:04:52.776 CC module/event/subsystems/fsdev/fsdev.o 00:04:52.776 CC module/event/subsystems/keyring/keyring.o 00:04:52.776 CC module/event/subsystems/scheduler/scheduler.o 00:04:52.776 LIB libspdk_event_scheduler.a 00:04:52.776 LIB libspdk_event_vmd.a 00:04:52.776 LIB libspdk_event_vfu_tgt.a 00:04:52.776 LIB libspdk_event_vhost_blk.a 00:04:52.776 LIB libspdk_event_fsdev.a 00:04:52.776 LIB libspdk_event_keyring.a 00:04:52.776 LIB libspdk_event_sock.a 00:04:52.776 LIB libspdk_event_iobuf.a 00:04:52.776 SO libspdk_event_scheduler.so.4.0 00:04:52.776 SO libspdk_event_vfu_tgt.so.3.0 00:04:52.776 SO libspdk_event_vmd.so.6.0 00:04:52.776 SO libspdk_event_iobuf.so.3.0 00:04:52.776 SO libspdk_event_vhost_blk.so.3.0 00:04:52.776 SO libspdk_event_fsdev.so.1.0 00:04:52.776 SO libspdk_event_keyring.so.1.0 00:04:52.776 SO libspdk_event_sock.so.5.0 00:04:52.776 SYMLINK libspdk_event_scheduler.so 00:04:52.776 SYMLINK libspdk_event_vfu_tgt.so 00:04:52.776 SYMLINK libspdk_event_vmd.so 00:04:52.776 SYMLINK libspdk_event_vhost_blk.so 00:04:52.776 SYMLINK libspdk_event_iobuf.so 00:04:52.776 SYMLINK libspdk_event_fsdev.so 00:04:52.776 SYMLINK libspdk_event_keyring.so 00:04:52.776 SYMLINK libspdk_event_sock.so 00:04:53.035 CC module/event/subsystems/accel/accel.o 00:04:53.296 LIB libspdk_event_accel.a 00:04:53.296 SO libspdk_event_accel.so.6.0 00:04:53.296 SYMLINK libspdk_event_accel.so 00:04:53.557 CC module/event/subsystems/bdev/bdev.o 00:04:53.818 LIB libspdk_event_bdev.a 00:04:53.818 SO libspdk_event_bdev.so.6.0 00:04:53.818 SYMLINK 
libspdk_event_bdev.so 00:04:54.388 CC module/event/subsystems/nbd/nbd.o 00:04:54.388 CC module/event/subsystems/scsi/scsi.o 00:04:54.388 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:54.388 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:54.388 CC module/event/subsystems/ublk/ublk.o 00:04:54.388 LIB libspdk_event_scsi.a 00:04:54.388 LIB libspdk_event_nbd.a 00:04:54.388 LIB libspdk_event_ublk.a 00:04:54.388 SO libspdk_event_scsi.so.6.0 00:04:54.388 SO libspdk_event_nbd.so.6.0 00:04:54.388 SO libspdk_event_ublk.so.3.0 00:04:54.388 LIB libspdk_event_nvmf.a 00:04:54.388 SYMLINK libspdk_event_scsi.so 00:04:54.388 SYMLINK libspdk_event_nbd.so 00:04:54.388 SO libspdk_event_nvmf.so.6.0 00:04:54.388 SYMLINK libspdk_event_ublk.so 00:04:54.649 SYMLINK libspdk_event_nvmf.so 00:04:54.649 CC module/event/subsystems/iscsi/iscsi.o 00:04:54.649 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:54.909 LIB libspdk_event_vhost_scsi.a 00:04:54.909 LIB libspdk_event_iscsi.a 00:04:54.909 SO libspdk_event_iscsi.so.6.0 00:04:54.910 SO libspdk_event_vhost_scsi.so.3.0 00:04:54.910 SYMLINK libspdk_event_vhost_scsi.so 00:04:54.910 SYMLINK libspdk_event_iscsi.so 00:04:55.170 SO libspdk.so.6.0 00:04:55.170 SYMLINK libspdk.so 00:04:55.430 CC app/spdk_top/spdk_top.o 00:04:55.430 CXX app/trace/trace.o 00:04:55.430 CC app/trace_record/trace_record.o 00:04:55.430 CC app/spdk_lspci/spdk_lspci.o 00:04:55.430 CC app/spdk_nvme_identify/identify.o 00:04:55.430 CC app/spdk_nvme_discover/discovery_aer.o 00:04:55.430 CC test/rpc_client/rpc_client_test.o 00:04:55.430 TEST_HEADER include/spdk/accel.h 00:04:55.430 TEST_HEADER include/spdk/accel_module.h 00:04:55.430 CC app/spdk_nvme_perf/perf.o 00:04:55.430 TEST_HEADER include/spdk/assert.h 00:04:55.430 TEST_HEADER include/spdk/barrier.h 00:04:55.430 TEST_HEADER include/spdk/bdev.h 00:04:55.430 TEST_HEADER include/spdk/base64.h 00:04:55.430 TEST_HEADER include/spdk/bit_array.h 00:04:55.430 TEST_HEADER include/spdk/bdev_module.h 00:04:55.430 
TEST_HEADER include/spdk/bdev_zone.h 00:04:55.430 TEST_HEADER include/spdk/bit_pool.h 00:04:55.430 TEST_HEADER include/spdk/blob_bdev.h 00:04:55.430 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:55.430 TEST_HEADER include/spdk/blobfs.h 00:04:55.430 TEST_HEADER include/spdk/config.h 00:04:55.430 TEST_HEADER include/spdk/blob.h 00:04:55.430 TEST_HEADER include/spdk/conf.h 00:04:55.430 TEST_HEADER include/spdk/cpuset.h 00:04:55.430 TEST_HEADER include/spdk/crc16.h 00:04:55.430 TEST_HEADER include/spdk/crc64.h 00:04:55.430 TEST_HEADER include/spdk/crc32.h 00:04:55.430 TEST_HEADER include/spdk/dma.h 00:04:55.430 TEST_HEADER include/spdk/dif.h 00:04:55.430 TEST_HEADER include/spdk/env_dpdk.h 00:04:55.430 TEST_HEADER include/spdk/endian.h 00:04:55.430 TEST_HEADER include/spdk/env.h 00:04:55.430 TEST_HEADER include/spdk/event.h 00:04:55.430 TEST_HEADER include/spdk/fd_group.h 00:04:55.430 TEST_HEADER include/spdk/fd.h 00:04:55.430 TEST_HEADER include/spdk/fsdev.h 00:04:55.430 TEST_HEADER include/spdk/file.h 00:04:55.430 TEST_HEADER include/spdk/fsdev_module.h 00:04:55.430 TEST_HEADER include/spdk/ftl.h 00:04:55.430 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:55.430 TEST_HEADER include/spdk/hexlify.h 00:04:55.430 TEST_HEADER include/spdk/gpt_spec.h 00:04:55.430 TEST_HEADER include/spdk/histogram_data.h 00:04:55.430 TEST_HEADER include/spdk/idxd.h 00:04:55.430 TEST_HEADER include/spdk/idxd_spec.h 00:04:55.430 TEST_HEADER include/spdk/init.h 00:04:55.430 TEST_HEADER include/spdk/ioat_spec.h 00:04:55.430 TEST_HEADER include/spdk/ioat.h 00:04:55.430 TEST_HEADER include/spdk/iscsi_spec.h 00:04:55.701 CC app/nvmf_tgt/nvmf_main.o 00:04:55.701 CC app/iscsi_tgt/iscsi_tgt.o 00:04:55.701 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:55.701 TEST_HEADER include/spdk/json.h 00:04:55.701 TEST_HEADER include/spdk/jsonrpc.h 00:04:55.701 TEST_HEADER include/spdk/keyring.h 00:04:55.701 TEST_HEADER include/spdk/keyring_module.h 00:04:55.701 TEST_HEADER include/spdk/likely.h 00:04:55.701 
TEST_HEADER include/spdk/log.h 00:04:55.701 TEST_HEADER include/spdk/lvol.h 00:04:55.701 TEST_HEADER include/spdk/memory.h 00:04:55.701 TEST_HEADER include/spdk/mmio.h 00:04:55.701 TEST_HEADER include/spdk/md5.h 00:04:55.701 TEST_HEADER include/spdk/nbd.h 00:04:55.701 TEST_HEADER include/spdk/notify.h 00:04:55.701 TEST_HEADER include/spdk/net.h 00:04:55.701 TEST_HEADER include/spdk/nvme.h 00:04:55.701 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:55.701 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:55.701 TEST_HEADER include/spdk/nvme_spec.h 00:04:55.701 TEST_HEADER include/spdk/nvme_intel.h 00:04:55.701 CC app/spdk_dd/spdk_dd.o 00:04:55.701 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:55.701 TEST_HEADER include/spdk/nvmf_spec.h 00:04:55.701 TEST_HEADER include/spdk/nvme_zns.h 00:04:55.701 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:55.701 TEST_HEADER include/spdk/nvmf.h 00:04:55.701 TEST_HEADER include/spdk/nvmf_transport.h 00:04:55.701 TEST_HEADER include/spdk/opal_spec.h 00:04:55.701 TEST_HEADER include/spdk/opal.h 00:04:55.701 TEST_HEADER include/spdk/pci_ids.h 00:04:55.701 TEST_HEADER include/spdk/pipe.h 00:04:55.701 TEST_HEADER include/spdk/queue.h 00:04:55.701 TEST_HEADER include/spdk/scheduler.h 00:04:55.701 TEST_HEADER include/spdk/reduce.h 00:04:55.701 TEST_HEADER include/spdk/rpc.h 00:04:55.701 TEST_HEADER include/spdk/sock.h 00:04:55.701 TEST_HEADER include/spdk/scsi.h 00:04:55.701 TEST_HEADER include/spdk/scsi_spec.h 00:04:55.701 TEST_HEADER include/spdk/string.h 00:04:55.701 TEST_HEADER include/spdk/stdinc.h 00:04:55.701 TEST_HEADER include/spdk/trace.h 00:04:55.701 TEST_HEADER include/spdk/thread.h 00:04:55.701 TEST_HEADER include/spdk/trace_parser.h 00:04:55.701 TEST_HEADER include/spdk/ublk.h 00:04:55.701 TEST_HEADER include/spdk/tree.h 00:04:55.701 TEST_HEADER include/spdk/util.h 00:04:55.701 TEST_HEADER include/spdk/version.h 00:04:55.701 TEST_HEADER include/spdk/uuid.h 00:04:55.701 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:55.701 CC 
app/spdk_tgt/spdk_tgt.o 00:04:55.701 TEST_HEADER include/spdk/vmd.h 00:04:55.701 TEST_HEADER include/spdk/vhost.h 00:04:55.701 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:55.701 TEST_HEADER include/spdk/zipf.h 00:04:55.701 TEST_HEADER include/spdk/xor.h 00:04:55.701 CXX test/cpp_headers/accel.o 00:04:55.701 CXX test/cpp_headers/accel_module.o 00:04:55.701 CXX test/cpp_headers/barrier.o 00:04:55.701 CXX test/cpp_headers/assert.o 00:04:55.701 CXX test/cpp_headers/bdev_module.o 00:04:55.701 CXX test/cpp_headers/base64.o 00:04:55.701 CXX test/cpp_headers/bdev_zone.o 00:04:55.701 CXX test/cpp_headers/bdev.o 00:04:55.701 CXX test/cpp_headers/bit_array.o 00:04:55.701 CXX test/cpp_headers/blob_bdev.o 00:04:55.701 CXX test/cpp_headers/bit_pool.o 00:04:55.701 CXX test/cpp_headers/blobfs_bdev.o 00:04:55.701 CXX test/cpp_headers/blob.o 00:04:55.701 CXX test/cpp_headers/blobfs.o 00:04:55.701 CXX test/cpp_headers/conf.o 00:04:55.701 CXX test/cpp_headers/crc16.o 00:04:55.701 CXX test/cpp_headers/config.o 00:04:55.701 CXX test/cpp_headers/cpuset.o 00:04:55.701 CXX test/cpp_headers/crc32.o 00:04:55.701 CXX test/cpp_headers/crc64.o 00:04:55.701 CXX test/cpp_headers/dif.o 00:04:55.701 CXX test/cpp_headers/dma.o 00:04:55.701 CXX test/cpp_headers/env_dpdk.o 00:04:55.701 CXX test/cpp_headers/endian.o 00:04:55.701 CXX test/cpp_headers/event.o 00:04:55.701 CXX test/cpp_headers/env.o 00:04:55.701 CXX test/cpp_headers/fd_group.o 00:04:55.701 CXX test/cpp_headers/fd.o 00:04:55.701 CXX test/cpp_headers/fsdev.o 00:04:55.701 CXX test/cpp_headers/fsdev_module.o 00:04:55.701 CXX test/cpp_headers/ftl.o 00:04:55.701 CXX test/cpp_headers/file.o 00:04:55.701 CXX test/cpp_headers/fuse_dispatcher.o 00:04:55.701 CXX test/cpp_headers/gpt_spec.o 00:04:55.701 CXX test/cpp_headers/hexlify.o 00:04:55.701 CXX test/cpp_headers/histogram_data.o 00:04:55.701 CXX test/cpp_headers/idxd.o 00:04:55.701 CXX test/cpp_headers/idxd_spec.o 00:04:55.701 CXX test/cpp_headers/init.o 00:04:55.701 CXX 
test/cpp_headers/ioat_spec.o 00:04:55.701 CXX test/cpp_headers/ioat.o 00:04:55.701 CXX test/cpp_headers/json.o 00:04:55.701 CXX test/cpp_headers/iscsi_spec.o 00:04:55.701 CXX test/cpp_headers/keyring.o 00:04:55.701 CXX test/cpp_headers/jsonrpc.o 00:04:55.701 CXX test/cpp_headers/keyring_module.o 00:04:55.701 CXX test/cpp_headers/likely.o 00:04:55.701 CXX test/cpp_headers/log.o 00:04:55.702 CXX test/cpp_headers/lvol.o 00:04:55.702 CXX test/cpp_headers/md5.o 00:04:55.702 CXX test/cpp_headers/memory.o 00:04:55.702 CXX test/cpp_headers/nbd.o 00:04:55.702 CXX test/cpp_headers/mmio.o 00:04:55.702 CXX test/cpp_headers/net.o 00:04:55.702 CXX test/cpp_headers/notify.o 00:04:55.702 CXX test/cpp_headers/nvme.o 00:04:55.702 CXX test/cpp_headers/nvme_intel.o 00:04:55.702 CXX test/cpp_headers/nvme_ocssd.o 00:04:55.702 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:55.702 CXX test/cpp_headers/nvme_spec.o 00:04:55.702 CXX test/cpp_headers/nvme_zns.o 00:04:55.702 CXX test/cpp_headers/nvmf_cmd.o 00:04:55.702 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:55.702 CXX test/cpp_headers/nvmf.o 00:04:55.702 CXX test/cpp_headers/nvmf_transport.o 00:04:55.702 CXX test/cpp_headers/nvmf_spec.o 00:04:55.702 CXX test/cpp_headers/opal.o 00:04:55.702 CXX test/cpp_headers/pci_ids.o 00:04:55.702 CXX test/cpp_headers/opal_spec.o 00:04:55.702 CXX test/cpp_headers/queue.o 00:04:55.702 CC examples/util/zipf/zipf.o 00:04:55.702 CXX test/cpp_headers/pipe.o 00:04:55.702 CXX test/cpp_headers/reduce.o 00:04:55.702 CXX test/cpp_headers/rpc.o 00:04:55.702 CXX test/cpp_headers/scheduler.o 00:04:55.702 CXX test/cpp_headers/scsi.o 00:04:55.702 CXX test/cpp_headers/scsi_spec.o 00:04:55.702 CXX test/cpp_headers/sock.o 00:04:55.702 CXX test/cpp_headers/stdinc.o 00:04:55.702 CXX test/cpp_headers/string.o 00:04:55.702 CC examples/ioat/perf/perf.o 00:04:55.702 CXX test/cpp_headers/thread.o 00:04:55.702 CXX test/cpp_headers/trace_parser.o 00:04:55.702 CXX test/cpp_headers/trace.o 00:04:55.702 CC examples/ioat/verify/verify.o 
00:04:55.702 CXX test/cpp_headers/tree.o 00:04:55.702 CC test/thread/poller_perf/poller_perf.o 00:04:55.702 CC app/fio/nvme/fio_plugin.o 00:04:55.702 CC test/app/histogram_perf/histogram_perf.o 00:04:55.702 CC test/env/vtophys/vtophys.o 00:04:55.702 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:55.702 CC test/env/pci/pci_ut.o 00:04:55.702 CXX test/cpp_headers/ublk.o 00:04:55.702 CC test/env/memory/memory_ut.o 00:04:55.702 CC test/app/stub/stub.o 00:04:55.702 CC test/app/bdev_svc/bdev_svc.o 00:04:55.992 CC test/dma/test_dma/test_dma.o 00:04:55.992 CXX test/cpp_headers/util.o 00:04:55.992 LINK spdk_lspci 00:04:55.992 CC app/fio/bdev/fio_plugin.o 00:04:55.992 CC test/app/jsoncat/jsoncat.o 00:04:56.256 LINK spdk_nvme_discover 00:04:56.256 LINK rpc_client_test 00:04:56.256 LINK iscsi_tgt 00:04:56.256 LINK nvmf_tgt 00:04:56.256 CC test/env/mem_callbacks/mem_callbacks.o 00:04:56.256 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:56.256 LINK interrupt_tgt 00:04:56.256 CXX test/cpp_headers/uuid.o 00:04:56.256 CXX test/cpp_headers/version.o 00:04:56.256 LINK vtophys 00:04:56.256 CXX test/cpp_headers/vfio_user_pci.o 00:04:56.256 CXX test/cpp_headers/vfio_user_spec.o 00:04:56.256 CXX test/cpp_headers/vhost.o 00:04:56.256 CXX test/cpp_headers/vmd.o 00:04:56.256 LINK poller_perf 00:04:56.256 CXX test/cpp_headers/xor.o 00:04:56.256 CXX test/cpp_headers/zipf.o 00:04:56.256 LINK env_dpdk_post_init 00:04:56.256 LINK spdk_tgt 00:04:56.517 LINK zipf 00:04:56.517 LINK bdev_svc 00:04:56.517 LINK stub 00:04:56.517 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:56.517 LINK spdk_trace_record 00:04:56.517 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:56.517 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:56.517 LINK jsoncat 00:04:56.517 LINK histogram_perf 00:04:56.517 LINK spdk_trace 00:04:56.517 LINK ioat_perf 00:04:56.517 LINK verify 00:04:56.517 LINK spdk_dd 00:04:56.776 LINK pci_ut 00:04:56.776 LINK nvme_fuzz 00:04:56.776 LINK test_dma 00:04:56.776 CC 
examples/idxd/perf/perf.o 00:04:56.776 CC examples/vmd/lsvmd/lsvmd.o 00:04:56.776 CC examples/sock/hello_world/hello_sock.o 00:04:56.776 CC examples/vmd/led/led.o 00:04:56.776 CC test/event/reactor_perf/reactor_perf.o 00:04:56.776 LINK vhost_fuzz 00:04:56.776 CC test/event/reactor/reactor.o 00:04:56.776 CC test/event/event_perf/event_perf.o 00:04:56.776 CC app/vhost/vhost.o 00:04:56.776 LINK spdk_top 00:04:57.035 CC examples/thread/thread/thread_ex.o 00:04:57.035 CC test/event/app_repeat/app_repeat.o 00:04:57.035 LINK spdk_nvme 00:04:57.035 LINK spdk_bdev 00:04:57.035 CC test/event/scheduler/scheduler.o 00:04:57.035 LINK spdk_nvme_identify 00:04:57.035 LINK mem_callbacks 00:04:57.035 LINK spdk_nvme_perf 00:04:57.035 LINK lsvmd 00:04:57.035 LINK reactor 00:04:57.035 LINK led 00:04:57.035 LINK reactor_perf 00:04:57.035 LINK event_perf 00:04:57.035 LINK app_repeat 00:04:57.035 LINK hello_sock 00:04:57.035 LINK vhost 00:04:57.035 LINK idxd_perf 00:04:57.035 LINK thread 00:04:57.035 LINK scheduler 00:04:57.294 LINK memory_ut 00:04:57.294 CC test/nvme/aer/aer.o 00:04:57.294 CC test/nvme/compliance/nvme_compliance.o 00:04:57.294 CC test/nvme/err_injection/err_injection.o 00:04:57.294 CC test/nvme/cuse/cuse.o 00:04:57.294 CC test/nvme/startup/startup.o 00:04:57.294 CC test/nvme/reset/reset.o 00:04:57.294 CC test/nvme/overhead/overhead.o 00:04:57.294 CC test/nvme/simple_copy/simple_copy.o 00:04:57.294 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:57.294 CC test/nvme/boot_partition/boot_partition.o 00:04:57.294 CC test/nvme/sgl/sgl.o 00:04:57.294 CC test/nvme/fused_ordering/fused_ordering.o 00:04:57.294 CC test/nvme/fdp/fdp.o 00:04:57.294 CC test/nvme/e2edp/nvme_dp.o 00:04:57.294 CC test/nvme/connect_stress/connect_stress.o 00:04:57.294 CC test/nvme/reserve/reserve.o 00:04:57.294 CC test/accel/dif/dif.o 00:04:57.294 CC test/blobfs/mkfs/mkfs.o 00:04:57.553 CC test/lvol/esnap/esnap.o 00:04:57.553 CC examples/nvme/abort/abort.o 00:04:57.553 CC 
examples/nvme/hello_world/hello_world.o 00:04:57.553 CC examples/nvme/hotplug/hotplug.o 00:04:57.553 CC examples/nvme/reconnect/reconnect.o 00:04:57.553 LINK doorbell_aers 00:04:57.553 LINK err_injection 00:04:57.553 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:57.553 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:57.553 LINK boot_partition 00:04:57.553 CC examples/nvme/arbitration/arbitration.o 00:04:57.553 LINK connect_stress 00:04:57.553 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:57.553 LINK startup 00:04:57.553 LINK simple_copy 00:04:57.553 LINK reserve 00:04:57.553 LINK mkfs 00:04:57.553 LINK fused_ordering 00:04:57.553 LINK reset 00:04:57.553 LINK sgl 00:04:57.553 LINK overhead 00:04:57.553 LINK aer 00:04:57.553 CC examples/accel/perf/accel_perf.o 00:04:57.553 LINK nvme_dp 00:04:57.553 LINK nvme_compliance 00:04:57.553 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:57.553 LINK fdp 00:04:57.812 CC examples/blob/hello_world/hello_blob.o 00:04:57.812 CC examples/blob/cli/blobcli.o 00:04:57.812 LINK pmr_persistence 00:04:57.812 LINK cmb_copy 00:04:57.812 LINK hello_world 00:04:57.812 LINK iscsi_fuzz 00:04:57.812 LINK hotplug 00:04:57.812 LINK arbitration 00:04:57.812 LINK abort 00:04:57.812 LINK reconnect 00:04:57.812 LINK dif 00:04:57.812 LINK hello_blob 00:04:57.812 LINK hello_fsdev 00:04:58.071 LINK nvme_manage 00:04:58.071 LINK accel_perf 00:04:58.071 LINK blobcli 00:04:58.331 LINK cuse 00:04:58.331 CC test/bdev/bdevio/bdevio.o 00:04:58.591 CC examples/bdev/hello_world/hello_bdev.o 00:04:58.591 CC examples/bdev/bdevperf/bdevperf.o 00:04:58.591 LINK hello_bdev 00:04:58.849 LINK bdevio 00:04:59.109 LINK bdevperf 00:04:59.679 CC examples/nvmf/nvmf/nvmf.o 00:04:59.679 LINK nvmf 00:05:01.060 LINK esnap 00:05:01.060 00:05:01.060 real 0m52.685s 00:05:01.060 user 7m56.050s 00:05:01.060 sys 3m53.924s 00:05:01.060 20:24:54 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:01.060 20:24:54 make -- common/autotest_common.sh@10 -- $ set +x 
00:05:01.060 ************************************ 00:05:01.060 END TEST make 00:05:01.060 ************************************ 00:05:01.060 20:24:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:01.060 20:24:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:01.060 20:24:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:01.060 20:24:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.060 20:24:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:01.060 20:24:54 -- pm/common@44 -- $ pid=71459 00:05:01.060 20:24:54 -- pm/common@50 -- $ kill -TERM 71459 00:05:01.060 20:24:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.060 20:24:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:01.060 20:24:54 -- pm/common@44 -- $ pid=71460 00:05:01.060 20:24:54 -- pm/common@50 -- $ kill -TERM 71460 00:05:01.060 20:24:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.060 20:24:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:01.060 20:24:54 -- pm/common@44 -- $ pid=71463 00:05:01.060 20:24:54 -- pm/common@50 -- $ kill -TERM 71463 00:05:01.060 20:24:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.060 20:24:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:01.060 20:24:54 -- pm/common@44 -- $ pid=71486 00:05:01.060 20:24:54 -- pm/common@50 -- $ sudo -E kill -TERM 71486 00:05:01.060 20:24:54 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:01.060 20:24:54 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 
00:05:01.321 20:24:54 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:01.321 20:24:54 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:01.321 20:24:54 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:01.321 20:24:54 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:01.321 20:24:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.321 20:24:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.321 20:24:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.321 20:24:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.321 20:24:54 -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.321 20:24:54 -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.321 20:24:54 -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.321 20:24:54 -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.321 20:24:54 -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.321 20:24:54 -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.321 20:24:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.321 20:24:54 -- scripts/common.sh@344 -- # case "$op" in 00:05:01.321 20:24:54 -- scripts/common.sh@345 -- # : 1 00:05:01.321 20:24:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.321 20:24:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.321 20:24:54 -- scripts/common.sh@365 -- # decimal 1 00:05:01.321 20:24:54 -- scripts/common.sh@353 -- # local d=1 00:05:01.321 20:24:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.321 20:24:54 -- scripts/common.sh@355 -- # echo 1 00:05:01.321 20:24:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.321 20:24:54 -- scripts/common.sh@366 -- # decimal 2 00:05:01.321 20:24:54 -- scripts/common.sh@353 -- # local d=2 00:05:01.321 20:24:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.321 20:24:54 -- scripts/common.sh@355 -- # echo 2 00:05:01.321 20:24:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.321 20:24:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.321 20:24:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.321 20:24:54 -- scripts/common.sh@368 -- # return 0 00:05:01.321 20:24:54 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.321 20:24:54 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:01.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.322 --rc genhtml_branch_coverage=1 00:05:01.322 --rc genhtml_function_coverage=1 00:05:01.322 --rc genhtml_legend=1 00:05:01.322 --rc geninfo_all_blocks=1 00:05:01.322 --rc geninfo_unexecuted_blocks=1 00:05:01.322 00:05:01.322 ' 00:05:01.322 20:24:54 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:01.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.322 --rc genhtml_branch_coverage=1 00:05:01.322 --rc genhtml_function_coverage=1 00:05:01.322 --rc genhtml_legend=1 00:05:01.322 --rc geninfo_all_blocks=1 00:05:01.322 --rc geninfo_unexecuted_blocks=1 00:05:01.322 00:05:01.322 ' 00:05:01.322 20:24:54 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:01.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.322 --rc genhtml_branch_coverage=1 00:05:01.322 --rc 
genhtml_function_coverage=1 00:05:01.322 --rc genhtml_legend=1 00:05:01.322 --rc geninfo_all_blocks=1 00:05:01.322 --rc geninfo_unexecuted_blocks=1 00:05:01.322 00:05:01.322 ' 00:05:01.322 20:24:54 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:01.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.322 --rc genhtml_branch_coverage=1 00:05:01.322 --rc genhtml_function_coverage=1 00:05:01.322 --rc genhtml_legend=1 00:05:01.322 --rc geninfo_all_blocks=1 00:05:01.322 --rc geninfo_unexecuted_blocks=1 00:05:01.322 00:05:01.322 ' 00:05:01.322 20:24:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.322 20:24:54 -- nvmf/common.sh@7 -- # uname -s 00:05:01.322 20:24:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.322 20:24:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.322 20:24:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.322 20:24:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.322 20:24:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.322 20:24:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.322 20:24:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.322 20:24:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.322 20:24:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.322 20:24:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.322 20:24:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:01.322 20:24:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:01.322 20:24:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.322 20:24:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.322 20:24:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:01.322 20:24:54 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.322 20:24:54 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:01.322 20:24:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.322 20:24:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.322 20:24:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.322 20:24:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.322 20:24:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.322 20:24:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.322 20:24:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.322 20:24:54 -- paths/export.sh@5 -- # export PATH 00:05:01.322 20:24:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.322 20:24:54 -- nvmf/common.sh@51 -- # : 0 00:05:01.322 20:24:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.322 20:24:54 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:01.322 20:24:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.322 20:24:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.322 20:24:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.322 20:24:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.322 20:24:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.322 20:24:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.322 20:24:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.322 20:24:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:01.322 20:24:54 -- spdk/autotest.sh@32 -- # uname -s 00:05:01.322 20:24:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:01.322 20:24:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:01.322 20:24:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:01.322 20:24:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:01.322 20:24:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:01.322 20:24:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:01.322 20:24:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:01.322 20:24:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:01.322 20:24:54 -- spdk/autotest.sh@48 -- # udevadm_pid=135729 00:05:01.322 20:24:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:01.322 20:24:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:01.322 20:24:54 -- pm/common@17 -- # local monitor 00:05:01.322 20:24:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.322 20:24:54 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:01.322 20:24:54 -- pm/common@21 -- # date +%s 00:05:01.322 20:24:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.322 20:24:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.322 20:24:54 -- pm/common@21 -- # date +%s 00:05:01.322 20:24:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733426694 00:05:01.322 20:24:54 -- pm/common@21 -- # date +%s 00:05:01.322 20:24:54 -- pm/common@25 -- # sleep 1 00:05:01.322 20:24:54 -- pm/common@21 -- # date +%s 00:05:01.322 20:24:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733426694 00:05:01.322 20:24:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733426694 00:05:01.322 20:24:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733426694 00:05:01.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733426694_collect-vmstat.pm.log 00:05:01.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733426694_collect-cpu-load.pm.log 00:05:01.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733426694_collect-cpu-temp.pm.log 00:05:01.583 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733426694_collect-bmc-pm.bmc.pm.log 00:05:02.519 
20:24:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:02.519 20:24:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:02.519 20:24:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.519 20:24:55 -- common/autotest_common.sh@10 -- # set +x 00:05:02.519 20:24:55 -- spdk/autotest.sh@59 -- # create_test_list 00:05:02.519 20:24:55 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:02.519 20:24:55 -- common/autotest_common.sh@10 -- # set +x 00:05:02.519 20:24:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:02.519 20:24:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.519 20:24:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.519 20:24:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:02.519 20:24:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.519 20:24:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:02.519 20:24:55 -- common/autotest_common.sh@1457 -- # uname 00:05:02.519 20:24:55 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:02.519 20:24:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:02.519 20:24:55 -- common/autotest_common.sh@1477 -- # uname 00:05:02.519 20:24:55 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:02.519 20:24:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:02.519 20:24:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:02.519 lcov: LCOV version 1.15 00:05:02.519 20:24:55 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:14.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:14.749 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:26.961 20:25:19 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:26.961 20:25:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.961 20:25:19 -- common/autotest_common.sh@10 -- # set +x 00:05:26.961 20:25:19 -- spdk/autotest.sh@78 -- # rm -f 00:05:26.961 20:25:19 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:28.864 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:05:28.864 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:28.864 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:28.864 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:28.864 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:28.864 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:28.864 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:28.864 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:28.864 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:28.864 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:28.864 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:28.864 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:29.123 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:29.123 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:29.123 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:29.123 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:29.123 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:29.123 20:25:22 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:29.123 20:25:22 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:29.123 20:25:22 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:29.123 20:25:22 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:29.123 20:25:22 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:29.123 20:25:22 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:29.123 20:25:22 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:29.123 20:25:22 -- common/autotest_common.sh@1669 -- # bdf=0000:86:00.0 00:05:29.123 20:25:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:29.123 20:25:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:29.123 20:25:22 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:29.123 20:25:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:29.123 20:25:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:29.123 20:25:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:29.123 20:25:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:29.123 20:25:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:29.124 20:25:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:29.124 20:25:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:29.124 20:25:22 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:29.124 No valid GPT data, bailing 00:05:29.124 20:25:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:29.124 20:25:22 -- scripts/common.sh@394 -- # pt= 00:05:29.124 20:25:22 -- scripts/common.sh@395 -- 
# return 1 00:05:29.124 20:25:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:29.124 1+0 records in 00:05:29.124 1+0 records out 00:05:29.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449853 s, 233 MB/s 00:05:29.124 20:25:22 -- spdk/autotest.sh@105 -- # sync 00:05:29.124 20:25:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:29.124 20:25:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:29.124 20:25:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:35.699 20:25:28 -- spdk/autotest.sh@111 -- # uname -s 00:05:35.699 20:25:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:35.699 20:25:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:35.699 20:25:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:38.238 Hugepages 00:05:38.238 node hugesize free / total 00:05:38.238 node0 1048576kB 0 / 0 00:05:38.238 node0 2048kB 0 / 0 00:05:38.238 node1 1048576kB 0 / 0 00:05:38.238 node1 2048kB 0 / 0 00:05:38.238 00:05:38.238 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:38.238 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:38.238 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:38.238 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:38.238 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:38.238 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:38.238 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:38.238 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:38.238 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:38.238 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:38.238 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:38.238 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:38.238 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:38.238 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:38.238 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:38.238 I/OAT 0000:80:04.6 8086 2021 1 
ioatdma - - 00:05:38.238 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:38.238 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:38.238 20:25:31 -- spdk/autotest.sh@117 -- # uname -s 00:05:38.238 20:25:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:38.238 20:25:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:38.238 20:25:31 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:41.528 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:41.528 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:42.095 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:05:42.355 20:25:35 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:43.292 20:25:36 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:43.293 20:25:36 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:43.293 20:25:36 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:43.293 20:25:36 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:43.293 20:25:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:43.293 20:25:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:43.293 20:25:36 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.293 20:25:36 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:43.293 20:25:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:43.293 20:25:36 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:43.293 20:25:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:05:43.293 20:25:36 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:46.585 Waiting for block devices as requested 00:05:46.585 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:05:46.585 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:46.585 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:46.585 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:46.585 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:46.585 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:46.585 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:46.585 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:46.845 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:46.845 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:46.845 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:47.103 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:47.103 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:47.103 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:47.363 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:47.363 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:47.363 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:47.363 20:25:40 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:47.363 20:25:40 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:05:47.363 20:25:40 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:47.363 20:25:40 -- 
common/autotest_common.sh@1487 -- # grep 0000:86:00.0/nvme/nvme 00:05:47.363 20:25:40 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:05:47.363 20:25:40 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:05:47.363 20:25:40 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:05:47.623 20:25:40 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:47.624 20:25:40 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:47.624 20:25:40 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:47.624 20:25:40 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:47.624 20:25:40 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:47.624 20:25:40 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:47.624 20:25:40 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:47.624 20:25:40 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:47.624 20:25:40 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:47.624 20:25:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:47.624 20:25:40 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:47.624 20:25:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:47.624 20:25:40 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:47.624 20:25:40 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:47.624 20:25:40 -- common/autotest_common.sh@1543 -- # continue 00:05:47.624 20:25:40 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:47.624 20:25:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.624 20:25:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.624 20:25:40 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:47.624 20:25:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.624 
20:25:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.624 20:25:40 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:50.921 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:50.921 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:51.491 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:05:51.491 20:25:44 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:51.491 20:25:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.491 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:05:51.491 20:25:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:51.491 20:25:44 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:51.491 20:25:44 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:51.491 20:25:44 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:51.491 20:25:44 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:51.491 20:25:44 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:51.491 20:25:44 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:51.491 20:25:44 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
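The `opal_revert_cleanup` trace around this point enumerates NVMe controllers and keeps only those whose PCI device ID matches `0x0a54` (read from `/sys/bus/pci/devices/<bdf>/device`). A minimal sketch of that filtering step — not the SPDK helper itself; `filter_bdfs_by_id` is a hypothetical stand-in that takes "bdf id" pairs on stdin instead of reading sysfs:

```shell
# Minimal sketch (assumption: not SPDK's real get_nvme_bdfs_by_id) of the
# device-ID filter seen in the trace. The real script cats
# /sys/bus/pci/devices/<bdf>/device and compares it against the target;
# here the pairs arrive on stdin so the logic is visible on its own.
filter_bdfs_by_id() {
    local target="$1" bdf id
    while read -r bdf id; do
        # Keep the BDF only when its PCI device ID matches the target.
        [[ "$id" == "$target" ]] && printf '%s\n' "$bdf"
    done
}

# The IDs below come from the log: 0x2021 is the I/OAT engine, 0x0a54 the NVMe drive.
printf '%s\n' \
    '0000:00:04.0 0x2021' \
    '0000:86:00.0 0x0a54' |
    filter_bdfs_by_id 0x0a54
```

With the inputs above this prints `0000:86:00.0`, matching the single BDF the log reports via `printf '%s\n' 0000:86:00.0`.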
00:05:51.491 20:25:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:51.491 20:25:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:51.491 20:25:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:51.491 20:25:44 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:51.491 20:25:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:51.751 20:25:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:51.751 20:25:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:05:51.751 20:25:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:51.751 20:25:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:05:51.751 20:25:44 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:51.751 20:25:44 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:51.751 20:25:44 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:51.751 20:25:44 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:51.751 20:25:44 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:86:00.0 00:05:51.751 20:25:44 -- common/autotest_common.sh@1579 -- # [[ -z 0000:86:00.0 ]] 00:05:51.751 20:25:44 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=150941 00:05:51.751 20:25:44 -- common/autotest_common.sh@1585 -- # waitforlisten 150941 00:05:51.751 20:25:44 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.751 20:25:44 -- common/autotest_common.sh@835 -- # '[' -z 150941 ']' 00:05:51.751 20:25:44 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.751 20:25:44 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.751 20:25:44 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.751 20:25:44 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.751 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:05:51.751 [2024-12-05 20:25:45.056759] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:05:51.751 [2024-12-05 20:25:45.056810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150941 ] 00:05:51.751 [2024-12-05 20:25:45.130564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.751 [2024-12-05 20:25:45.171352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.688 20:25:45 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.688 20:25:45 -- common/autotest_common.sh@868 -- # return 0 00:05:52.688 20:25:45 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:52.688 20:25:45 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:52.688 20:25:45 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:05:55.976 nvme0n1 00:05:55.976 20:25:48 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:55.976 [2024-12-05 20:25:48.999650] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:55.976 request: 00:05:55.976 { 00:05:55.976 "nvme_ctrlr_name": "nvme0", 00:05:55.976 "password": "test", 00:05:55.976 "method": "bdev_nvme_opal_revert", 00:05:55.976 "req_id": 1 00:05:55.976 } 00:05:55.976 Got JSON-RPC error response 00:05:55.976 response: 00:05:55.976 { 00:05:55.976 
"code": -32602, 00:05:55.976 "message": "Invalid parameters" 00:05:55.976 } 00:05:55.976 20:25:49 -- common/autotest_common.sh@1591 -- # true 00:05:55.976 20:25:49 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:55.976 20:25:49 -- common/autotest_common.sh@1595 -- # killprocess 150941 00:05:55.976 20:25:49 -- common/autotest_common.sh@954 -- # '[' -z 150941 ']' 00:05:55.976 20:25:49 -- common/autotest_common.sh@958 -- # kill -0 150941 00:05:55.976 20:25:49 -- common/autotest_common.sh@959 -- # uname 00:05:55.976 20:25:49 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.976 20:25:49 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 150941 00:05:55.976 20:25:49 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.976 20:25:49 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.976 20:25:49 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 150941' 00:05:55.976 killing process with pid 150941 00:05:55.976 20:25:49 -- common/autotest_common.sh@973 -- # kill 150941 00:05:55.976 20:25:49 -- common/autotest_common.sh@978 -- # wait 150941 00:05:57.355 20:25:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:57.355 20:25:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:57.355 20:25:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:57.355 20:25:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:57.355 20:25:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:57.355 20:25:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.355 20:25:50 -- common/autotest_common.sh@10 -- # set +x 00:05:57.355 20:25:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:57.355 20:25:50 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:57.355 20:25:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.355 20:25:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.355 20:25:50 -- 
common/autotest_common.sh@10 -- # set +x 00:05:57.355 ************************************ 00:05:57.355 START TEST env 00:05:57.355 ************************************ 00:05:57.355 20:25:50 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:57.615 * Looking for test storage... 00:05:57.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:57.615 20:25:50 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.615 20:25:50 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.615 20:25:50 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.615 20:25:50 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.615 20:25:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.615 20:25:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.615 20:25:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.615 20:25:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.615 20:25:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.615 20:25:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.615 20:25:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.615 20:25:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.615 20:25:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.615 20:25:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.615 20:25:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.615 20:25:50 env -- scripts/common.sh@344 -- # case "$op" in 00:05:57.615 20:25:50 env -- scripts/common.sh@345 -- # : 1 00:05:57.615 20:25:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.615 20:25:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.615 20:25:50 env -- scripts/common.sh@365 -- # decimal 1 00:05:57.615 20:25:50 env -- scripts/common.sh@353 -- # local d=1 00:05:57.615 20:25:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.615 20:25:50 env -- scripts/common.sh@355 -- # echo 1 00:05:57.615 20:25:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.615 20:25:50 env -- scripts/common.sh@366 -- # decimal 2 00:05:57.615 20:25:50 env -- scripts/common.sh@353 -- # local d=2 00:05:57.615 20:25:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.615 20:25:50 env -- scripts/common.sh@355 -- # echo 2 00:05:57.615 20:25:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.615 20:25:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.615 20:25:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.615 20:25:50 env -- scripts/common.sh@368 -- # return 0 00:05:57.615 20:25:50 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.615 20:25:50 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.615 --rc genhtml_branch_coverage=1 00:05:57.615 --rc genhtml_function_coverage=1 00:05:57.615 --rc genhtml_legend=1 00:05:57.615 --rc geninfo_all_blocks=1 00:05:57.615 --rc geninfo_unexecuted_blocks=1 00:05:57.615 00:05:57.615 ' 00:05:57.615 20:25:50 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.615 --rc genhtml_branch_coverage=1 00:05:57.615 --rc genhtml_function_coverage=1 00:05:57.615 --rc genhtml_legend=1 00:05:57.615 --rc geninfo_all_blocks=1 00:05:57.615 --rc geninfo_unexecuted_blocks=1 00:05:57.615 00:05:57.615 ' 00:05:57.615 20:25:50 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:57.615 --rc genhtml_branch_coverage=1 00:05:57.615 --rc genhtml_function_coverage=1 00:05:57.615 --rc genhtml_legend=1 00:05:57.615 --rc geninfo_all_blocks=1 00:05:57.615 --rc geninfo_unexecuted_blocks=1 00:05:57.615 00:05:57.615 ' 00:05:57.615 20:25:50 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.615 --rc genhtml_branch_coverage=1 00:05:57.615 --rc genhtml_function_coverage=1 00:05:57.615 --rc genhtml_legend=1 00:05:57.615 --rc geninfo_all_blocks=1 00:05:57.615 --rc geninfo_unexecuted_blocks=1 00:05:57.615 00:05:57.615 ' 00:05:57.615 20:25:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:57.615 20:25:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.615 20:25:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.615 20:25:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.616 ************************************ 00:05:57.616 START TEST env_memory 00:05:57.616 ************************************ 00:05:57.616 20:25:50 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:57.616 00:05:57.616 00:05:57.616 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.616 http://cunit.sourceforge.net/ 00:05:57.616 00:05:57.616 00:05:57.616 Suite: memory 00:05:57.616 Test: alloc and free memory map ...[2024-12-05 20:25:50.996026] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:57.616 passed 00:05:57.616 Test: mem map translation ...[2024-12-05 20:25:51.013060] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:57.616 [2024-12-05 
20:25:51.013072] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:57.616 [2024-12-05 20:25:51.013102] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:57.616 [2024-12-05 20:25:51.013108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:57.616 passed 00:05:57.616 Test: mem map registration ...[2024-12-05 20:25:51.047141] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:57.616 [2024-12-05 20:25:51.047154] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:57.876 passed 00:05:57.876 Test: mem map adjacent registrations ...passed 00:05:57.876 00:05:57.876 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.876 suites 1 1 n/a 0 0 00:05:57.876 tests 4 4 4 0 0 00:05:57.876 asserts 152 152 152 0 n/a 00:05:57.876 00:05:57.877 Elapsed time = 0.114 seconds 00:05:57.877 00:05:57.877 real 0m0.122s 00:05:57.877 user 0m0.113s 00:05:57.877 sys 0m0.008s 00:05:57.877 20:25:51 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.877 20:25:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:57.877 ************************************ 00:05:57.877 END TEST env_memory 00:05:57.877 ************************************ 00:05:57.877 20:25:51 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:57.877 20:25:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:57.877 20:25:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.877 20:25:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.877 ************************************ 00:05:57.877 START TEST env_vtophys 00:05:57.877 ************************************ 00:05:57.877 20:25:51 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:57.877 EAL: lib.eal log level changed from notice to debug 00:05:57.877 EAL: Detected lcore 0 as core 0 on socket 0 00:05:57.877 EAL: Detected lcore 1 as core 1 on socket 0 00:05:57.877 EAL: Detected lcore 2 as core 2 on socket 0 00:05:57.877 EAL: Detected lcore 3 as core 3 on socket 0 00:05:57.877 EAL: Detected lcore 4 as core 4 on socket 0 00:05:57.877 EAL: Detected lcore 5 as core 5 on socket 0 00:05:57.877 EAL: Detected lcore 6 as core 6 on socket 0 00:05:57.877 EAL: Detected lcore 7 as core 8 on socket 0 00:05:57.877 EAL: Detected lcore 8 as core 9 on socket 0 00:05:57.877 EAL: Detected lcore 9 as core 10 on socket 0 00:05:57.877 EAL: Detected lcore 10 as core 11 on socket 0 00:05:57.877 EAL: Detected lcore 11 as core 12 on socket 0 00:05:57.877 EAL: Detected lcore 12 as core 13 on socket 0 00:05:57.877 EAL: Detected lcore 13 as core 14 on socket 0 00:05:57.877 EAL: Detected lcore 14 as core 16 on socket 0 00:05:57.877 EAL: Detected lcore 15 as core 17 on socket 0 00:05:57.877 EAL: Detected lcore 16 as core 18 on socket 0 00:05:57.877 EAL: Detected lcore 17 as core 19 on socket 0 00:05:57.877 EAL: Detected lcore 18 as core 20 on socket 0 00:05:57.877 EAL: Detected lcore 19 as core 21 on socket 0 00:05:57.877 EAL: Detected lcore 20 as core 22 on socket 0 00:05:57.877 EAL: Detected lcore 21 as core 24 on socket 0 00:05:57.877 EAL: Detected lcore 22 as core 25 on socket 0 00:05:57.877 EAL: Detected lcore 23 as core 26 on socket 0 00:05:57.877 EAL: Detected lcore 24 as core 27 on socket 0 00:05:57.877 EAL: Detected lcore 25 
as core 28 on socket 0 00:05:57.877 EAL: Detected lcore 26 as core 29 on socket 0 00:05:57.877 EAL: Detected lcore 27 as core 30 on socket 0 00:05:57.877 EAL: Detected lcore 28 as core 0 on socket 1 00:05:57.877 EAL: Detected lcore 29 as core 1 on socket 1 00:05:57.877 EAL: Detected lcore 30 as core 2 on socket 1 00:05:57.877 EAL: Detected lcore 31 as core 3 on socket 1 00:05:57.877 EAL: Detected lcore 32 as core 4 on socket 1 00:05:57.877 EAL: Detected lcore 33 as core 5 on socket 1 00:05:57.877 EAL: Detected lcore 34 as core 6 on socket 1 00:05:57.877 EAL: Detected lcore 35 as core 8 on socket 1 00:05:57.877 EAL: Detected lcore 36 as core 9 on socket 1 00:05:57.877 EAL: Detected lcore 37 as core 10 on socket 1 00:05:57.877 EAL: Detected lcore 38 as core 11 on socket 1 00:05:57.877 EAL: Detected lcore 39 as core 12 on socket 1 00:05:57.877 EAL: Detected lcore 40 as core 13 on socket 1 00:05:57.877 EAL: Detected lcore 41 as core 14 on socket 1 00:05:57.877 EAL: Detected lcore 42 as core 16 on socket 1 00:05:57.877 EAL: Detected lcore 43 as core 17 on socket 1 00:05:57.877 EAL: Detected lcore 44 as core 18 on socket 1 00:05:57.877 EAL: Detected lcore 45 as core 19 on socket 1 00:05:57.877 EAL: Detected lcore 46 as core 20 on socket 1 00:05:57.877 EAL: Detected lcore 47 as core 21 on socket 1 00:05:57.877 EAL: Detected lcore 48 as core 22 on socket 1 00:05:57.877 EAL: Detected lcore 49 as core 24 on socket 1 00:05:57.877 EAL: Detected lcore 50 as core 25 on socket 1 00:05:57.877 EAL: Detected lcore 51 as core 26 on socket 1 00:05:57.877 EAL: Detected lcore 52 as core 27 on socket 1 00:05:57.877 EAL: Detected lcore 53 as core 28 on socket 1 00:05:57.877 EAL: Detected lcore 54 as core 29 on socket 1 00:05:57.877 EAL: Detected lcore 55 as core 30 on socket 1 00:05:57.877 EAL: Detected lcore 56 as core 0 on socket 0 00:05:57.877 EAL: Detected lcore 57 as core 1 on socket 0 00:05:57.877 EAL: Detected lcore 58 as core 2 on socket 0 00:05:57.877 EAL: Detected lcore 59 as 
core 3 on socket 0 00:05:57.877 EAL: Detected lcore 60 as core 4 on socket 0 00:05:57.877 EAL: Detected lcore 61 as core 5 on socket 0 00:05:57.877 EAL: Detected lcore 62 as core 6 on socket 0 00:05:57.877 EAL: Detected lcore 63 as core 8 on socket 0 00:05:57.877 EAL: Detected lcore 64 as core 9 on socket 0 00:05:57.877 EAL: Detected lcore 65 as core 10 on socket 0 00:05:57.877 EAL: Detected lcore 66 as core 11 on socket 0 00:05:57.877 EAL: Detected lcore 67 as core 12 on socket 0 00:05:57.877 EAL: Detected lcore 68 as core 13 on socket 0 00:05:57.877 EAL: Detected lcore 69 as core 14 on socket 0 00:05:57.877 EAL: Detected lcore 70 as core 16 on socket 0 00:05:57.877 EAL: Detected lcore 71 as core 17 on socket 0 00:05:57.877 EAL: Detected lcore 72 as core 18 on socket 0 00:05:57.877 EAL: Detected lcore 73 as core 19 on socket 0 00:05:57.877 EAL: Detected lcore 74 as core 20 on socket 0 00:05:57.877 EAL: Detected lcore 75 as core 21 on socket 0 00:05:57.877 EAL: Detected lcore 76 as core 22 on socket 0 00:05:57.877 EAL: Detected lcore 77 as core 24 on socket 0 00:05:57.877 EAL: Detected lcore 78 as core 25 on socket 0 00:05:57.877 EAL: Detected lcore 79 as core 26 on socket 0 00:05:57.877 EAL: Detected lcore 80 as core 27 on socket 0 00:05:57.877 EAL: Detected lcore 81 as core 28 on socket 0 00:05:57.877 EAL: Detected lcore 82 as core 29 on socket 0 00:05:57.877 EAL: Detected lcore 83 as core 30 on socket 0 00:05:57.877 EAL: Detected lcore 84 as core 0 on socket 1 00:05:57.877 EAL: Detected lcore 85 as core 1 on socket 1 00:05:57.877 EAL: Detected lcore 86 as core 2 on socket 1 00:05:57.877 EAL: Detected lcore 87 as core 3 on socket 1 00:05:57.877 EAL: Detected lcore 88 as core 4 on socket 1 00:05:57.877 EAL: Detected lcore 89 as core 5 on socket 1 00:05:57.877 EAL: Detected lcore 90 as core 6 on socket 1 00:05:57.877 EAL: Detected lcore 91 as core 8 on socket 1 00:05:57.877 EAL: Detected lcore 92 as core 9 on socket 1 00:05:57.877 EAL: Detected lcore 93 as core 10 
on socket 1 00:05:57.877 EAL: Detected lcore 94 as core 11 on socket 1 00:05:57.877 EAL: Detected lcore 95 as core 12 on socket 1 00:05:57.877 EAL: Detected lcore 96 as core 13 on socket 1 00:05:57.877 EAL: Detected lcore 97 as core 14 on socket 1 00:05:57.877 EAL: Detected lcore 98 as core 16 on socket 1 00:05:57.877 EAL: Detected lcore 99 as core 17 on socket 1 00:05:57.877 EAL: Detected lcore 100 as core 18 on socket 1 00:05:57.877 EAL: Detected lcore 101 as core 19 on socket 1 00:05:57.877 EAL: Detected lcore 102 as core 20 on socket 1 00:05:57.877 EAL: Detected lcore 103 as core 21 on socket 1 00:05:57.877 EAL: Detected lcore 104 as core 22 on socket 1 00:05:57.877 EAL: Detected lcore 105 as core 24 on socket 1 00:05:57.877 EAL: Detected lcore 106 as core 25 on socket 1 00:05:57.877 EAL: Detected lcore 107 as core 26 on socket 1 00:05:57.877 EAL: Detected lcore 108 as core 27 on socket 1 00:05:57.877 EAL: Detected lcore 109 as core 28 on socket 1 00:05:57.877 EAL: Detected lcore 110 as core 29 on socket 1 00:05:57.877 EAL: Detected lcore 111 as core 30 on socket 1 00:05:57.877 EAL: Maximum logical cores by configuration: 128 00:05:57.877 EAL: Detected CPU lcores: 112 00:05:57.877 EAL: Detected NUMA nodes: 2 00:05:57.877 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:57.877 EAL: Detected shared linkage of DPDK 00:05:57.877 EAL: No shared files mode enabled, IPC will be disabled 00:05:57.877 EAL: Bus pci wants IOVA as 'DC' 00:05:57.877 EAL: Buses did not request a specific IOVA mode. 00:05:57.877 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:57.877 EAL: Selected IOVA mode 'VA' 00:05:57.877 EAL: Probing VFIO support... 
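The long run of `EAL: Detected lcore N as core M on socket S` lines above encodes the topology that EAL then summarizes as "Detected CPU lcores: 112" and "Detected NUMA nodes: 2". A small sketch (an illustration, not part of the test suite) that folds those lines into a per-socket lcore count:

```shell
# Minimal sketch: tally the "Detected lcore ... on socket S" lines from the
# EAL trace into per-socket lcore counts. On this machine that yields
# 56 lcores on each of the two sockets (112 total).
count_lcores_per_socket() {
    awk '/Detected lcore/ { n[$NF]++ }          # $NF is the socket number
         END { for (s in n) printf "socket %s: %d lcores\n", s, n[s] }'
}

# Example: pipe the log through the counter (order of sockets is made
# deterministic with sort, since awk array iteration order is unspecified).
# grep 'Detected lcore' eal.log | count_lcores_per_socket | sort
```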
00:05:57.877 EAL: IOMMU type 1 (Type 1) is supported
00:05:57.877 EAL: IOMMU type 7 (sPAPR) is not supported
00:05:57.877 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:05:57.877 EAL: VFIO support initialized
00:05:57.877 EAL: Ask a virtual area of 0x2e000 bytes
00:05:57.877 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:05:57.877 EAL: Setting up physically contiguous memory...
00:05:57.877 EAL: Setting maximum number of open files to 524288
00:05:57.877 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:05:57.877 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:05:57.877 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:05:57.877 EAL: Ask a virtual area of 0x61000 bytes
00:05:57.877 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:05:57.877 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:57.877 EAL: Ask a virtual area of 0x400000000 bytes
00:05:57.877 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:05:57.877 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:05:57.877 EAL: Ask a virtual area of 0x61000 bytes
00:05:57.877 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:05:57.877 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:57.877 EAL: Ask a virtual area of 0x400000000 bytes
00:05:57.877 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:05:57.877 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:05:57.877 EAL: Ask a virtual area of 0x61000 bytes
00:05:57.877 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:05:57.877 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:57.877 EAL: Ask a virtual area of 0x400000000 bytes
00:05:57.877 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:05:57.877 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:05:57.877 EAL: Ask a virtual area of 0x61000 bytes
00:05:57.877 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:05:57.877 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:57.877 EAL: Ask a virtual area of 0x400000000 bytes
00:05:57.877 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:05:57.877 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:05:57.877 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:05:57.877 EAL: Ask a virtual area of 0x61000 bytes
00:05:57.877 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:05:57.877 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:57.877 EAL: Ask a virtual area of 0x400000000 bytes
00:05:57.877 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:05:57.877 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:05:57.878 EAL: Ask a virtual area of 0x61000 bytes
00:05:57.878 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:05:57.878 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:57.878 EAL: Ask a virtual area of 0x400000000 bytes
00:05:57.878 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:05:57.878 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:05:57.878 EAL: Ask a virtual area of 0x61000 bytes
00:05:57.878 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:05:57.878 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:57.878 EAL: Ask a virtual area of 0x400000000 bytes
00:05:57.878 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:05:57.878 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:05:57.878 EAL: Ask a virtual area of 0x61000 bytes
00:05:57.878 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:05:57.878 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:57.878 EAL: Ask a virtual area of 0x400000000 bytes
00:05:57.878 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:05:57.878 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:05:57.878 EAL: Hugepages will be freed exactly as allocated.
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: TSC frequency is ~2200000 KHz
00:05:57.878 EAL: Main lcore 0 is ready (tid=7f6f402d5a00;cpuset=[0])
00:05:57.878 EAL: Trying to obtain current memory policy.
00:05:57.878 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:57.878 EAL: Restoring previous memory policy: 0
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was expanded by 2MB
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: No PCI address specified using 'addr=' in: bus=pci
00:05:57.878 EAL: Mem event callback 'spdk:(nil)' registered
00:05:57.878
00:05:57.878
00:05:57.878 CUnit - A unit testing framework for C - Version 2.1-3
00:05:57.878 http://cunit.sourceforge.net/
00:05:57.878
00:05:57.878
00:05:57.878 Suite: components_suite
00:05:57.878 Test: vtophys_malloc_test ...passed
00:05:57.878 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:57.878 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:57.878 EAL: Restoring previous memory policy: 4
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was expanded by 4MB
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was shrunk by 4MB
00:05:57.878 EAL: Trying to obtain current memory policy.
00:05:57.878 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:57.878 EAL: Restoring previous memory policy: 4
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was expanded by 6MB
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was shrunk by 6MB
00:05:57.878 EAL: Trying to obtain current memory policy.
00:05:57.878 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:57.878 EAL: Restoring previous memory policy: 4
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was expanded by 10MB
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was shrunk by 10MB
00:05:57.878 EAL: Trying to obtain current memory policy.
00:05:57.878 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:57.878 EAL: Restoring previous memory policy: 4
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was expanded by 18MB
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was shrunk by 18MB
00:05:57.878 EAL: Trying to obtain current memory policy.
00:05:57.878 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:57.878 EAL: Restoring previous memory policy: 4
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was expanded by 34MB
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was shrunk by 34MB
00:05:57.878 EAL: Trying to obtain current memory policy.
00:05:57.878 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:57.878 EAL: Restoring previous memory policy: 4
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was expanded by 66MB
00:05:57.878 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.878 EAL: request: mp_malloc_sync
00:05:57.878 EAL: No shared files mode enabled, IPC is disabled
00:05:57.878 EAL: Heap on socket 0 was shrunk by 66MB
00:05:57.878 EAL: Trying to obtain current memory policy.
00:05:57.878 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:58.138 EAL: Restoring previous memory policy: 4
00:05:58.138 EAL: Calling mem event callback 'spdk:(nil)'
00:05:58.138 EAL: request: mp_malloc_sync
00:05:58.138 EAL: No shared files mode enabled, IPC is disabled
00:05:58.138 EAL: Heap on socket 0 was expanded by 130MB
00:05:58.138 EAL: Calling mem event callback 'spdk:(nil)'
00:05:58.138 EAL: request: mp_malloc_sync
00:05:58.138 EAL: No shared files mode enabled, IPC is disabled
00:05:58.138 EAL: Heap on socket 0 was shrunk by 130MB
00:05:58.138 EAL: Trying to obtain current memory policy.
00:05:58.138 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:58.138 EAL: Restoring previous memory policy: 4
00:05:58.138 EAL: Calling mem event callback 'spdk:(nil)'
00:05:58.138 EAL: request: mp_malloc_sync
00:05:58.138 EAL: No shared files mode enabled, IPC is disabled
00:05:58.138 EAL: Heap on socket 0 was expanded by 258MB
00:05:58.138 EAL: Calling mem event callback 'spdk:(nil)'
00:05:58.138 EAL: request: mp_malloc_sync
00:05:58.138 EAL: No shared files mode enabled, IPC is disabled
00:05:58.138 EAL: Heap on socket 0 was shrunk by 258MB
00:05:58.138 EAL: Trying to obtain current memory policy.
00:05:58.138 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:58.138 EAL: Restoring previous memory policy: 4
00:05:58.138 EAL: Calling mem event callback 'spdk:(nil)'
00:05:58.138 EAL: request: mp_malloc_sync
00:05:58.138 EAL: No shared files mode enabled, IPC is disabled
00:05:58.138 EAL: Heap on socket 0 was expanded by 514MB
00:05:58.397 EAL: Calling mem event callback 'spdk:(nil)'
00:05:58.397 EAL: request: mp_malloc_sync
00:05:58.397 EAL: No shared files mode enabled, IPC is disabled
00:05:58.397 EAL: Heap on socket 0 was shrunk by 514MB
00:05:58.397 EAL: Trying to obtain current memory policy.
00:05:58.397 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:58.657 EAL: Restoring previous memory policy: 4
00:05:58.657 EAL: Calling mem event callback 'spdk:(nil)'
00:05:58.657 EAL: request: mp_malloc_sync
00:05:58.657 EAL: No shared files mode enabled, IPC is disabled
00:05:58.657 EAL: Heap on socket 0 was expanded by 1026MB
00:05:58.657 EAL: Calling mem event callback 'spdk:(nil)'
00:05:58.917 EAL: request: mp_malloc_sync
00:05:58.917 EAL: No shared files mode enabled, IPC is disabled
00:05:58.917 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:58.917 passed
00:05:58.917
00:05:58.917 Run Summary: Type Total Ran Passed Failed Inactive
00:05:58.917 suites 1 1 n/a 0 0
00:05:58.917 tests 2 2 2 0 0
00:05:58.917 asserts 497 497 497 0 n/a
00:05:58.917
00:05:58.917 Elapsed time = 0.959 seconds
00:05:58.917 EAL: Calling mem event callback 'spdk:(nil)'
00:05:58.917 EAL: request: mp_malloc_sync
00:05:58.917 EAL: No shared files mode enabled, IPC is disabled
00:05:58.917 EAL: Heap on socket 0 was shrunk by 2MB
00:05:58.917 EAL: No shared files mode enabled, IPC is disabled
00:05:58.917 EAL: No shared files mode enabled, IPC is disabled
00:05:58.917 EAL: No shared files mode enabled, IPC is disabled
00:05:58.917
00:05:58.917 real 0m1.090s
00:05:58.917 user 0m0.641s
00:05:58.917 sys 0m0.420s
00:05:58.917 20:25:52 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:58.917 20:25:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:58.917 ************************************
00:05:58.917 END TEST env_vtophys
00:05:58.917 ************************************
00:05:58.917 20:25:52 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:58.917 20:25:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:58.917 20:25:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:58.917 20:25:52 env -- common/autotest_common.sh@10 -- # set +x
00:05:58.917 ************************************
00:05:58.917 START TEST env_pci
00:05:58.917 ************************************
00:05:58.917 20:25:52 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:58.917
00:05:58.917
00:05:58.917 CUnit - A unit testing framework for C - Version 2.1-3
00:05:58.917 http://cunit.sourceforge.net/
00:05:58.917
00:05:58.917
00:05:58.917 Suite: pci
00:05:58.917 Test: pci_hook ...[2024-12-05 20:25:52.331186] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 152450 has claimed it
00:05:59.176 EAL: Cannot find device (10000:00:01.0)
00:05:59.176 EAL: Failed to attach device on primary process
00:05:59.176 passed
00:05:59.176
00:05:59.176 Run Summary: Type Total Ran Passed Failed Inactive
00:05:59.176 suites 1 1 n/a 0 0
00:05:59.176 tests 1 1 1 0 0
00:05:59.176 asserts 25 25 25 0 n/a
00:05:59.176
00:05:59.176 Elapsed time = 0.027 seconds
00:05:59.176
00:05:59.176 real 0m0.042s
00:05:59.176 user 0m0.012s
00:05:59.176 sys 0m0.030s
00:05:59.176 20:25:52 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:59.176 20:25:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:59.176 ************************************
00:05:59.176 END TEST env_pci
00:05:59.176 ************************************
00:05:59.176 20:25:52 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:59.176 20:25:52 env -- env/env.sh@15 -- # uname
00:05:59.176 20:25:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:59.176 20:25:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:59.176 20:25:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:59.176 20:25:52 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:59.176 20:25:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:59.176 20:25:52 env -- common/autotest_common.sh@10 -- # set +x
00:05:59.176 ************************************
00:05:59.176 START TEST env_dpdk_post_init
00:05:59.176 ************************************
00:05:59.176 20:25:52 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:59.176 EAL: Detected CPU lcores: 112
00:05:59.176 EAL: Detected NUMA nodes: 2
00:05:59.176 EAL: Detected shared linkage of DPDK
00:05:59.176 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:59.176 EAL: Selected IOVA mode 'VA'
00:05:59.176 EAL: VFIO support initialized
00:05:59.176 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:59.176 EAL: Using IOMMU type 1 (Type 1)
00:05:59.176 EAL: Ignore mapping IO port bar(1)
00:05:59.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:05:59.176 EAL: Ignore mapping IO port bar(1)
00:05:59.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:05:59.176 EAL: Ignore mapping IO port bar(1)
00:05:59.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:05:59.176 EAL: Ignore mapping IO port bar(1)
00:05:59.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:05:59.176 EAL: Ignore mapping IO port bar(1)
00:05:59.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:05:59.447 EAL: Ignore mapping IO port bar(1)
00:05:59.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:05:59.447 EAL: Ignore mapping IO port bar(1)
00:05:59.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:05:59.447 EAL: Ignore mapping IO port bar(1)
00:05:59.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:05:59.447 EAL: Ignore mapping IO port bar(1)
00:05:59.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:05:59.447 EAL: Ignore mapping IO port bar(1)
00:05:59.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:05:59.447 EAL: Ignore mapping IO port bar(1)
00:05:59.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:05:59.447 EAL: Ignore mapping IO port bar(1)
00:05:59.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:05:59.447 EAL: Ignore mapping IO port bar(1)
00:05:59.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:05:59.447 EAL: Ignore mapping IO port bar(1)
00:05:59.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:05:59.447 EAL: Ignore mapping IO port bar(1)
00:05:59.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:05:59.447 EAL: Ignore mapping IO port bar(1)
00:05:59.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:06:00.390 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1)
00:06:03.678 EAL: Releasing PCI mapped resource for 0000:86:00.0
00:06:03.678 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000
00:06:03.678 Starting DPDK initialization...
00:06:03.678 Starting SPDK post initialization...
00:06:03.678 SPDK NVMe probe
00:06:03.678 Attaching to 0000:86:00.0
00:06:03.678 Attached to 0000:86:00.0
00:06:03.678 Cleaning up...
00:06:03.678
00:06:03.678 real 0m4.451s
00:06:03.678 user 0m3.036s
00:06:03.678 sys 0m0.465s
00:06:03.678 20:25:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:03.678 20:25:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:03.678 ************************************
00:06:03.678 END TEST env_dpdk_post_init
00:06:03.678 ************************************
00:06:03.678 20:25:56 env -- env/env.sh@26 -- # uname
00:06:03.678 20:25:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:03.678 20:25:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:03.678 20:25:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:03.678 20:25:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:03.678 20:25:56 env -- common/autotest_common.sh@10 -- # set +x
00:06:03.678 ************************************
00:06:03.678 START TEST env_mem_callbacks
00:06:03.678 ************************************
00:06:03.678 20:25:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:03.678 EAL: Detected CPU lcores: 112
00:06:03.678 EAL: Detected NUMA nodes: 2
00:06:03.678 EAL: Detected shared linkage of DPDK
00:06:03.678 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:03.678 EAL: Selected IOVA mode 'VA'
00:06:03.678 EAL: VFIO support initialized
00:06:03.678 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:03.678
00:06:03.678
00:06:03.678 CUnit - A unit testing framework for C - Version 2.1-3
00:06:03.678 http://cunit.sourceforge.net/
00:06:03.678
00:06:03.678
00:06:03.678 Suite: memory
00:06:03.678 Test: test ...
00:06:03.678 register 0x200000200000 2097152
00:06:03.678 malloc 3145728
00:06:03.678 register 0x200000400000 4194304
00:06:03.678 buf 0x200000500000 len 3145728 PASSED
00:06:03.678 malloc 64
00:06:03.678 buf 0x2000004fff40 len 64 PASSED
00:06:03.678 malloc 4194304
00:06:03.678 register 0x200000800000 6291456
00:06:03.678 buf 0x200000a00000 len 4194304 PASSED
00:06:03.678 free 0x200000500000 3145728
00:06:03.678 free 0x2000004fff40 64
00:06:03.678 unregister 0x200000400000 4194304 PASSED
00:06:03.678 free 0x200000a00000 4194304
00:06:03.678 unregister 0x200000800000 6291456 PASSED
00:06:03.678 malloc 8388608
00:06:03.678 register 0x200000400000 10485760
00:06:03.678 buf 0x200000600000 len 8388608 PASSED
00:06:03.678 free 0x200000600000 8388608
00:06:03.678 unregister 0x200000400000 10485760 PASSED
00:06:03.678 passed
00:06:03.678
00:06:03.678 Run Summary: Type Total Ran Passed Failed Inactive
00:06:03.678 suites 1 1 n/a 0 0
00:06:03.678 tests 1 1 1 0 0
00:06:03.678 asserts 15 15 15 0 n/a
00:06:03.678
00:06:03.678 Elapsed time = 0.008 seconds
00:06:03.678
00:06:03.678 real 0m0.059s
00:06:03.678 user 0m0.021s
00:06:03.678 sys 0m0.038s
00:06:03.678 20:25:57 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:03.678 20:25:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:03.678 ************************************
00:06:03.678 END TEST env_mem_callbacks
00:06:03.678 ************************************
00:06:03.678
00:06:03.678 real 0m6.287s
00:06:03.678 user 0m4.061s
00:06:03.678 sys 0m1.280s
00:06:03.678 20:25:57 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:03.678 20:25:57 env -- common/autotest_common.sh@10 -- # set +x
00:06:03.678 ************************************
00:06:03.678 END TEST env
00:06:03.678 ************************************
00:06:03.678 20:25:57 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:03.678 20:25:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:03.678 20:25:57 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:03.678 20:25:57 -- common/autotest_common.sh@10 -- # set +x
00:06:03.939 ************************************
00:06:03.939 START TEST rpc
00:06:03.939 ************************************
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:03.939 * Looking for test storage...
00:06:03.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:03.939 20:25:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:03.939 20:25:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:03.939 20:25:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:03.939 20:25:57 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:03.939 20:25:57 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:03.939 20:25:57 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:03.939 20:25:57 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:03.939 20:25:57 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:03.939 20:25:57 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:03.939 20:25:57 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:03.939 20:25:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:03.939 20:25:57 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:03.939 20:25:57 rpc -- scripts/common.sh@345 -- # : 1
00:06:03.939 20:25:57 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:03.939 20:25:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:03.939 20:25:57 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:03.939 20:25:57 rpc -- scripts/common.sh@353 -- # local d=1
00:06:03.939 20:25:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:03.939 20:25:57 rpc -- scripts/common.sh@355 -- # echo 1
00:06:03.939 20:25:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:03.939 20:25:57 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:03.939 20:25:57 rpc -- scripts/common.sh@353 -- # local d=2
00:06:03.939 20:25:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:03.939 20:25:57 rpc -- scripts/common.sh@355 -- # echo 2
00:06:03.939 20:25:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:03.939 20:25:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:03.939 20:25:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:03.939 20:25:57 rpc -- scripts/common.sh@368 -- # return 0
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.939 --rc genhtml_branch_coverage=1
00:06:03.939 --rc genhtml_function_coverage=1
00:06:03.939 --rc genhtml_legend=1
00:06:03.939 --rc geninfo_all_blocks=1
00:06:03.939 --rc geninfo_unexecuted_blocks=1
00:06:03.939
00:06:03.939 '
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.939 --rc genhtml_branch_coverage=1
00:06:03.939 --rc genhtml_function_coverage=1
00:06:03.939 --rc genhtml_legend=1
00:06:03.939 --rc geninfo_all_blocks=1
00:06:03.939 --rc geninfo_unexecuted_blocks=1
00:06:03.939
00:06:03.939 '
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.939 --rc genhtml_branch_coverage=1
00:06:03.939 --rc genhtml_function_coverage=1
00:06:03.939 --rc genhtml_legend=1
00:06:03.939 --rc geninfo_all_blocks=1
00:06:03.939 --rc geninfo_unexecuted_blocks=1
00:06:03.939
00:06:03.939 '
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.939 --rc genhtml_branch_coverage=1
00:06:03.939 --rc genhtml_function_coverage=1
00:06:03.939 --rc genhtml_legend=1
00:06:03.939 --rc geninfo_all_blocks=1
00:06:03.939 --rc geninfo_unexecuted_blocks=1
00:06:03.939
00:06:03.939 '
00:06:03.939 20:25:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=153381
00:06:03.939 20:25:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:03.939 20:25:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:03.939 20:25:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 153381
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@835 -- # '[' -z 153381 ']'
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:03.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:03.939 20:25:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:03.940 [2024-12-05 20:25:57.347646] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:06:03.940 [2024-12-05 20:25:57.347688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153381 ]
00:06:04.199 [2024-12-05 20:25:57.417423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.199 [2024-12-05 20:25:57.453753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:04.199 [2024-12-05 20:25:57.453786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 153381' to capture a snapshot of events at runtime.
00:06:04.199 [2024-12-05 20:25:57.453792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:04.199 [2024-12-05 20:25:57.453797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:04.199 [2024-12-05 20:25:57.453802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid153381 for offline analysis/debug.
00:06:04.199 [2024-12-05 20:25:57.454353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.459 20:25:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:04.460 20:25:57 rpc -- common/autotest_common.sh@868 -- # return 0
00:06:04.460 20:25:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:04.460 20:25:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:04.460 20:25:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:04.460 20:25:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:04.460 20:25:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:04.460 20:25:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:04.460 20:25:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:04.460 ************************************
00:06:04.460 START TEST rpc_integrity
00:06:04.460 ************************************
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:04.460 {
00:06:04.460 "name": "Malloc0",
00:06:04.460 "aliases": [
00:06:04.460 "d5fcb634-fa12-457a-952a-bc2496c181c6"
00:06:04.460 ],
00:06:04.460 "product_name": "Malloc disk",
00:06:04.460 "block_size": 512,
00:06:04.460 "num_blocks": 16384,
00:06:04.460 "uuid": "d5fcb634-fa12-457a-952a-bc2496c181c6",
00:06:04.460 "assigned_rate_limits": {
00:06:04.460 "rw_ios_per_sec": 0,
00:06:04.460 "rw_mbytes_per_sec": 0,
00:06:04.460 "r_mbytes_per_sec": 0,
00:06:04.460 "w_mbytes_per_sec": 0
00:06:04.460 },
00:06:04.460 "claimed": false,
00:06:04.460 "zoned": false,
00:06:04.460 "supported_io_types": {
00:06:04.460 "read": true,
00:06:04.460 "write": true,
00:06:04.460 "unmap": true,
00:06:04.460 "flush": true,
00:06:04.460 "reset": true,
00:06:04.460 "nvme_admin": false,
00:06:04.460 "nvme_io": false,
00:06:04.460 "nvme_io_md": false,
00:06:04.460 "write_zeroes": true,
00:06:04.460 "zcopy": true,
00:06:04.460 "get_zone_info": false,
00:06:04.460 "zone_management": false,
00:06:04.460 "zone_append": false,
00:06:04.460 "compare": false,
00:06:04.460 "compare_and_write": false,
00:06:04.460 "abort": true,
00:06:04.460 "seek_hole": false,
00:06:04.460 "seek_data": false,
00:06:04.460 "copy": true,
00:06:04.460 "nvme_iov_md": false
00:06:04.460 },
00:06:04.460 "memory_domains": [
00:06:04.460 {
00:06:04.460 "dma_device_id": "system",
00:06:04.460 "dma_device_type": 1
00:06:04.460 },
00:06:04.460 {
00:06:04.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:04.460 "dma_device_type": 2
00:06:04.460 }
00:06:04.460 ],
00:06:04.460 "driver_specific": {}
00:06:04.460 }
00:06:04.460 ]'
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:04.460 [2024-12-05 20:25:57.849076] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:04.460 [2024-12-05 20:25:57.849103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:04.460 [2024-12-05 20:25:57.849115] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12b15c0
00:06:04.460 [2024-12-05 20:25:57.849120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:04.460 [2024-12-05 20:25:57.850129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:04.460 [2024-12-05 20:25:57.850147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:04.460 Passthru0
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:04.460 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:04.460 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:04.460 {
00:06:04.460 "name": "Malloc0",
00:06:04.460 "aliases": [
00:06:04.460 "d5fcb634-fa12-457a-952a-bc2496c181c6"
00:06:04.460 ],
00:06:04.460 "product_name": "Malloc disk",
00:06:04.460 "block_size": 512,
00:06:04.460 "num_blocks": 16384,
00:06:04.460 "uuid": "d5fcb634-fa12-457a-952a-bc2496c181c6",
00:06:04.460 "assigned_rate_limits": {
00:06:04.460 "rw_ios_per_sec": 0,
00:06:04.460 "rw_mbytes_per_sec": 0,
00:06:04.460 "r_mbytes_per_sec": 0,
00:06:04.460 "w_mbytes_per_sec": 0
00:06:04.460 },
00:06:04.460 "claimed": true,
00:06:04.460 "claim_type": "exclusive_write",
00:06:04.460 "zoned": false,
00:06:04.460 "supported_io_types": {
00:06:04.460 "read": true,
00:06:04.461 "write": true,
00:06:04.461 "unmap": true,
00:06:04.461 "flush": true,
00:06:04.461 "reset": true,
00:06:04.461 "nvme_admin": false,
00:06:04.461 "nvme_io": false,
00:06:04.461 "nvme_io_md": false,
00:06:04.461 "write_zeroes": true,
00:06:04.461 "zcopy": true,
00:06:04.461 "get_zone_info": false,
00:06:04.461 "zone_management": false,
00:06:04.461 "zone_append": false,
00:06:04.461 "compare": false,
00:06:04.461 "compare_and_write": false,
00:06:04.461 "abort": true,
00:06:04.461 "seek_hole": false,
00:06:04.461 "seek_data": false,
00:06:04.461 "copy": true,
00:06:04.461 "nvme_iov_md": false
00:06:04.461 },
00:06:04.461 "memory_domains": [
00:06:04.461 {
00:06:04.461 "dma_device_id": "system",
00:06:04.461 "dma_device_type": 1
00:06:04.461 },
00:06:04.461 {
00:06:04.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:04.461 "dma_device_type": 2
00:06:04.461 }
00:06:04.461 ],
00:06:04.461 "driver_specific": {}
00:06:04.461 },
00:06:04.461 {
00:06:04.461 "name": "Passthru0", 00:06:04.461 "aliases": [ 00:06:04.461 "5d25955a-b985-54af-8f12-9fd9711c8687" 00:06:04.461 ], 00:06:04.461 "product_name": "passthru", 00:06:04.461 "block_size": 512, 00:06:04.461 "num_blocks": 16384, 00:06:04.461 "uuid": "5d25955a-b985-54af-8f12-9fd9711c8687", 00:06:04.461 "assigned_rate_limits": { 00:06:04.461 "rw_ios_per_sec": 0, 00:06:04.461 "rw_mbytes_per_sec": 0, 00:06:04.461 "r_mbytes_per_sec": 0, 00:06:04.461 "w_mbytes_per_sec": 0 00:06:04.461 }, 00:06:04.461 "claimed": false, 00:06:04.461 "zoned": false, 00:06:04.461 "supported_io_types": { 00:06:04.461 "read": true, 00:06:04.461 "write": true, 00:06:04.461 "unmap": true, 00:06:04.461 "flush": true, 00:06:04.461 "reset": true, 00:06:04.461 "nvme_admin": false, 00:06:04.461 "nvme_io": false, 00:06:04.461 "nvme_io_md": false, 00:06:04.461 "write_zeroes": true, 00:06:04.461 "zcopy": true, 00:06:04.461 "get_zone_info": false, 00:06:04.461 "zone_management": false, 00:06:04.461 "zone_append": false, 00:06:04.461 "compare": false, 00:06:04.461 "compare_and_write": false, 00:06:04.461 "abort": true, 00:06:04.461 "seek_hole": false, 00:06:04.461 "seek_data": false, 00:06:04.461 "copy": true, 00:06:04.461 "nvme_iov_md": false 00:06:04.461 }, 00:06:04.461 "memory_domains": [ 00:06:04.461 { 00:06:04.461 "dma_device_id": "system", 00:06:04.461 "dma_device_type": 1 00:06:04.461 }, 00:06:04.461 { 00:06:04.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.461 "dma_device_type": 2 00:06:04.461 } 00:06:04.461 ], 00:06:04.461 "driver_specific": { 00:06:04.461 "passthru": { 00:06:04.461 "name": "Passthru0", 00:06:04.461 "base_bdev_name": "Malloc0" 00:06:04.461 } 00:06:04.461 } 00:06:04.461 } 00:06:04.461 ]' 00:06:04.461 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:04.721 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:04.721 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:04.721 20:25:57 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.721 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.721 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.721 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:04.721 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.721 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.721 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.721 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:04.721 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.721 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.721 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.721 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:04.721 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:04.721 20:25:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:04.721 00:06:04.721 real 0m0.284s 00:06:04.721 user 0m0.178s 00:06:04.721 sys 0m0.039s 00:06:04.721 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.721 20:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.721 ************************************ 00:06:04.721 END TEST rpc_integrity 00:06:04.721 ************************************ 00:06:04.721 20:25:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:04.721 20:25:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.721 20:25:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.721 20:25:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.721 ************************************ 00:06:04.721 START TEST rpc_plugins 
00:06:04.721 ************************************ 00:06:04.721 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:04.721 20:25:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:04.721 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.721 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.722 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.722 20:25:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:04.722 20:25:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:04.722 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.722 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.722 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.722 20:25:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:04.722 { 00:06:04.722 "name": "Malloc1", 00:06:04.722 "aliases": [ 00:06:04.722 "8629c319-5056-4da1-8a6a-87c7a0e02aa1" 00:06:04.722 ], 00:06:04.722 "product_name": "Malloc disk", 00:06:04.722 "block_size": 4096, 00:06:04.722 "num_blocks": 256, 00:06:04.722 "uuid": "8629c319-5056-4da1-8a6a-87c7a0e02aa1", 00:06:04.722 "assigned_rate_limits": { 00:06:04.722 "rw_ios_per_sec": 0, 00:06:04.722 "rw_mbytes_per_sec": 0, 00:06:04.722 "r_mbytes_per_sec": 0, 00:06:04.722 "w_mbytes_per_sec": 0 00:06:04.722 }, 00:06:04.722 "claimed": false, 00:06:04.722 "zoned": false, 00:06:04.722 "supported_io_types": { 00:06:04.722 "read": true, 00:06:04.722 "write": true, 00:06:04.722 "unmap": true, 00:06:04.722 "flush": true, 00:06:04.722 "reset": true, 00:06:04.722 "nvme_admin": false, 00:06:04.722 "nvme_io": false, 00:06:04.722 "nvme_io_md": false, 00:06:04.722 "write_zeroes": true, 00:06:04.722 "zcopy": true, 00:06:04.722 "get_zone_info": false, 00:06:04.722 "zone_management": false, 00:06:04.722 
"zone_append": false, 00:06:04.722 "compare": false, 00:06:04.722 "compare_and_write": false, 00:06:04.722 "abort": true, 00:06:04.722 "seek_hole": false, 00:06:04.722 "seek_data": false, 00:06:04.722 "copy": true, 00:06:04.722 "nvme_iov_md": false 00:06:04.722 }, 00:06:04.722 "memory_domains": [ 00:06:04.722 { 00:06:04.722 "dma_device_id": "system", 00:06:04.722 "dma_device_type": 1 00:06:04.722 }, 00:06:04.722 { 00:06:04.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.722 "dma_device_type": 2 00:06:04.722 } 00:06:04.722 ], 00:06:04.722 "driver_specific": {} 00:06:04.722 } 00:06:04.722 ]' 00:06:04.722 20:25:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:04.722 20:25:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:04.722 20:25:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:04.722 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.722 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.722 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.722 20:25:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:04.722 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.722 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.722 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.722 20:25:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:04.982 20:25:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:04.982 20:25:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:04.982 00:06:04.982 real 0m0.142s 00:06:04.982 user 0m0.086s 00:06:04.982 sys 0m0.019s 00:06:04.982 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.982 20:25:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.982 ************************************ 
00:06:04.982 END TEST rpc_plugins 00:06:04.982 ************************************ 00:06:04.982 20:25:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:04.982 20:25:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.982 20:25:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.982 20:25:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.982 ************************************ 00:06:04.982 START TEST rpc_trace_cmd_test 00:06:04.982 ************************************ 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:04.982 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid153381", 00:06:04.982 "tpoint_group_mask": "0x8", 00:06:04.982 "iscsi_conn": { 00:06:04.982 "mask": "0x2", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "scsi": { 00:06:04.982 "mask": "0x4", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "bdev": { 00:06:04.982 "mask": "0x8", 00:06:04.982 "tpoint_mask": "0xffffffffffffffff" 00:06:04.982 }, 00:06:04.982 "nvmf_rdma": { 00:06:04.982 "mask": "0x10", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "nvmf_tcp": { 00:06:04.982 "mask": "0x20", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "ftl": { 00:06:04.982 "mask": "0x40", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "blobfs": { 00:06:04.982 "mask": "0x80", 00:06:04.982 
"tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "dsa": { 00:06:04.982 "mask": "0x200", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "thread": { 00:06:04.982 "mask": "0x400", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "nvme_pcie": { 00:06:04.982 "mask": "0x800", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "iaa": { 00:06:04.982 "mask": "0x1000", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "nvme_tcp": { 00:06:04.982 "mask": "0x2000", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "bdev_nvme": { 00:06:04.982 "mask": "0x4000", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "sock": { 00:06:04.982 "mask": "0x8000", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "blob": { 00:06:04.982 "mask": "0x10000", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "bdev_raid": { 00:06:04.982 "mask": "0x20000", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 }, 00:06:04.982 "scheduler": { 00:06:04.982 "mask": "0x40000", 00:06:04.982 "tpoint_mask": "0x0" 00:06:04.982 } 00:06:04.982 }' 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:04.982 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:05.242 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:05.242 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:05.242 20:25:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:05.242 00:06:05.242 real 0m0.232s 00:06:05.242 user 0m0.193s 00:06:05.242 sys 0m0.029s 00:06:05.242 20:25:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.242 20:25:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.242 ************************************ 00:06:05.242 END TEST rpc_trace_cmd_test 00:06:05.242 ************************************ 00:06:05.242 20:25:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:05.242 20:25:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:05.242 20:25:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:05.242 20:25:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.242 20:25:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.242 20:25:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.242 ************************************ 00:06:05.242 START TEST rpc_daemon_integrity 00:06:05.242 ************************************ 00:06:05.242 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:05.242 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.242 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.242 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.242 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.242 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.243 { 00:06:05.243 "name": "Malloc2", 00:06:05.243 "aliases": [ 00:06:05.243 "6faedd01-4641-459e-b6f2-b0be96c91e55" 00:06:05.243 ], 00:06:05.243 "product_name": "Malloc disk", 00:06:05.243 "block_size": 512, 00:06:05.243 "num_blocks": 16384, 00:06:05.243 "uuid": "6faedd01-4641-459e-b6f2-b0be96c91e55", 00:06:05.243 "assigned_rate_limits": { 00:06:05.243 "rw_ios_per_sec": 0, 00:06:05.243 "rw_mbytes_per_sec": 0, 00:06:05.243 "r_mbytes_per_sec": 0, 00:06:05.243 "w_mbytes_per_sec": 0 00:06:05.243 }, 00:06:05.243 "claimed": false, 00:06:05.243 "zoned": false, 00:06:05.243 "supported_io_types": { 00:06:05.243 "read": true, 00:06:05.243 "write": true, 00:06:05.243 "unmap": true, 00:06:05.243 "flush": true, 00:06:05.243 "reset": true, 00:06:05.243 "nvme_admin": false, 00:06:05.243 "nvme_io": false, 00:06:05.243 "nvme_io_md": false, 00:06:05.243 "write_zeroes": true, 00:06:05.243 "zcopy": true, 00:06:05.243 "get_zone_info": false, 00:06:05.243 "zone_management": false, 00:06:05.243 "zone_append": false, 00:06:05.243 "compare": false, 00:06:05.243 "compare_and_write": false, 00:06:05.243 "abort": true, 00:06:05.243 "seek_hole": false, 00:06:05.243 "seek_data": false, 00:06:05.243 "copy": true, 00:06:05.243 "nvme_iov_md": false 00:06:05.243 }, 00:06:05.243 "memory_domains": [ 00:06:05.243 { 
00:06:05.243 "dma_device_id": "system", 00:06:05.243 "dma_device_type": 1 00:06:05.243 }, 00:06:05.243 { 00:06:05.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.243 "dma_device_type": 2 00:06:05.243 } 00:06:05.243 ], 00:06:05.243 "driver_specific": {} 00:06:05.243 } 00:06:05.243 ]' 00:06:05.243 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.503 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.503 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:05.503 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.503 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.503 [2024-12-05 20:25:58.711394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:05.503 [2024-12-05 20:25:58.711421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.503 [2024-12-05 20:25:58.711432] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x127eea0 00:06:05.503 [2024-12-05 20:25:58.711438] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.503 [2024-12-05 20:25:58.712336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.503 [2024-12-05 20:25:58.712354] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.503 Passthru0 00:06:05.503 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.503 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.503 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.503 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.503 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:05.503 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.503 { 00:06:05.503 "name": "Malloc2", 00:06:05.503 "aliases": [ 00:06:05.503 "6faedd01-4641-459e-b6f2-b0be96c91e55" 00:06:05.503 ], 00:06:05.503 "product_name": "Malloc disk", 00:06:05.503 "block_size": 512, 00:06:05.503 "num_blocks": 16384, 00:06:05.503 "uuid": "6faedd01-4641-459e-b6f2-b0be96c91e55", 00:06:05.503 "assigned_rate_limits": { 00:06:05.503 "rw_ios_per_sec": 0, 00:06:05.503 "rw_mbytes_per_sec": 0, 00:06:05.503 "r_mbytes_per_sec": 0, 00:06:05.503 "w_mbytes_per_sec": 0 00:06:05.503 }, 00:06:05.503 "claimed": true, 00:06:05.503 "claim_type": "exclusive_write", 00:06:05.503 "zoned": false, 00:06:05.503 "supported_io_types": { 00:06:05.503 "read": true, 00:06:05.503 "write": true, 00:06:05.503 "unmap": true, 00:06:05.503 "flush": true, 00:06:05.503 "reset": true, 00:06:05.503 "nvme_admin": false, 00:06:05.503 "nvme_io": false, 00:06:05.503 "nvme_io_md": false, 00:06:05.503 "write_zeroes": true, 00:06:05.503 "zcopy": true, 00:06:05.503 "get_zone_info": false, 00:06:05.503 "zone_management": false, 00:06:05.503 "zone_append": false, 00:06:05.503 "compare": false, 00:06:05.503 "compare_and_write": false, 00:06:05.503 "abort": true, 00:06:05.503 "seek_hole": false, 00:06:05.503 "seek_data": false, 00:06:05.503 "copy": true, 00:06:05.503 "nvme_iov_md": false 00:06:05.503 }, 00:06:05.503 "memory_domains": [ 00:06:05.503 { 00:06:05.503 "dma_device_id": "system", 00:06:05.503 "dma_device_type": 1 00:06:05.503 }, 00:06:05.503 { 00:06:05.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.503 "dma_device_type": 2 00:06:05.503 } 00:06:05.503 ], 00:06:05.503 "driver_specific": {} 00:06:05.503 }, 00:06:05.503 { 00:06:05.503 "name": "Passthru0", 00:06:05.503 "aliases": [ 00:06:05.503 "b692f8a5-7f98-5863-b133-2a74edde2558" 00:06:05.503 ], 00:06:05.503 "product_name": "passthru", 00:06:05.503 "block_size": 512, 00:06:05.503 "num_blocks": 16384, 00:06:05.503 "uuid": 
"b692f8a5-7f98-5863-b133-2a74edde2558", 00:06:05.503 "assigned_rate_limits": { 00:06:05.503 "rw_ios_per_sec": 0, 00:06:05.503 "rw_mbytes_per_sec": 0, 00:06:05.503 "r_mbytes_per_sec": 0, 00:06:05.503 "w_mbytes_per_sec": 0 00:06:05.503 }, 00:06:05.503 "claimed": false, 00:06:05.503 "zoned": false, 00:06:05.503 "supported_io_types": { 00:06:05.503 "read": true, 00:06:05.504 "write": true, 00:06:05.504 "unmap": true, 00:06:05.504 "flush": true, 00:06:05.504 "reset": true, 00:06:05.504 "nvme_admin": false, 00:06:05.504 "nvme_io": false, 00:06:05.504 "nvme_io_md": false, 00:06:05.504 "write_zeroes": true, 00:06:05.504 "zcopy": true, 00:06:05.504 "get_zone_info": false, 00:06:05.504 "zone_management": false, 00:06:05.504 "zone_append": false, 00:06:05.504 "compare": false, 00:06:05.504 "compare_and_write": false, 00:06:05.504 "abort": true, 00:06:05.504 "seek_hole": false, 00:06:05.504 "seek_data": false, 00:06:05.504 "copy": true, 00:06:05.504 "nvme_iov_md": false 00:06:05.504 }, 00:06:05.504 "memory_domains": [ 00:06:05.504 { 00:06:05.504 "dma_device_id": "system", 00:06:05.504 "dma_device_type": 1 00:06:05.504 }, 00:06:05.504 { 00:06:05.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.504 "dma_device_type": 2 00:06:05.504 } 00:06:05.504 ], 00:06:05.504 "driver_specific": { 00:06:05.504 "passthru": { 00:06:05.504 "name": "Passthru0", 00:06:05.504 "base_bdev_name": "Malloc2" 00:06:05.504 } 00:06:05.504 } 00:06:05.504 } 00:06:05.504 ]' 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.504 00:06:05.504 real 0m0.275s 00:06:05.504 user 0m0.169s 00:06:05.504 sys 0m0.036s 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.504 20:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.504 ************************************ 00:06:05.504 END TEST rpc_daemon_integrity 00:06:05.504 ************************************ 00:06:05.504 20:25:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:05.504 20:25:58 rpc -- rpc/rpc.sh@84 -- # killprocess 153381 00:06:05.504 20:25:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 153381 ']' 00:06:05.504 20:25:58 rpc -- common/autotest_common.sh@958 -- # kill -0 153381 00:06:05.504 20:25:58 rpc -- common/autotest_common.sh@959 -- # uname 00:06:05.504 20:25:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.504 20:25:58 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153381 00:06:05.504 20:25:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.504 20:25:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.504 20:25:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153381' 00:06:05.504 killing process with pid 153381 00:06:05.504 20:25:58 rpc -- common/autotest_common.sh@973 -- # kill 153381 00:06:05.504 20:25:58 rpc -- common/autotest_common.sh@978 -- # wait 153381 00:06:06.074 00:06:06.074 real 0m2.109s 00:06:06.074 user 0m2.669s 00:06:06.074 sys 0m0.711s 00:06:06.074 20:25:59 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.074 20:25:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.074 ************************************ 00:06:06.074 END TEST rpc 00:06:06.074 ************************************ 00:06:06.074 20:25:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:06.074 20:25:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.074 20:25:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.074 20:25:59 -- common/autotest_common.sh@10 -- # set +x 00:06:06.074 ************************************ 00:06:06.074 START TEST skip_rpc 00:06:06.075 ************************************ 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:06.075 * Looking for test storage... 
00:06:06.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.075 20:25:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.075 --rc genhtml_branch_coverage=1 00:06:06.075 --rc genhtml_function_coverage=1 00:06:06.075 --rc genhtml_legend=1 00:06:06.075 --rc geninfo_all_blocks=1 00:06:06.075 --rc geninfo_unexecuted_blocks=1 00:06:06.075 00:06:06.075 ' 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.075 --rc genhtml_branch_coverage=1 00:06:06.075 --rc genhtml_function_coverage=1 00:06:06.075 --rc genhtml_legend=1 00:06:06.075 --rc geninfo_all_blocks=1 00:06:06.075 --rc geninfo_unexecuted_blocks=1 00:06:06.075 00:06:06.075 ' 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:06.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.075 --rc genhtml_branch_coverage=1 00:06:06.075 --rc genhtml_function_coverage=1 00:06:06.075 --rc genhtml_legend=1 00:06:06.075 --rc geninfo_all_blocks=1 00:06:06.075 --rc geninfo_unexecuted_blocks=1 00:06:06.075 00:06:06.075 ' 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.075 --rc genhtml_branch_coverage=1 00:06:06.075 --rc genhtml_function_coverage=1 00:06:06.075 --rc genhtml_legend=1 00:06:06.075 --rc geninfo_all_blocks=1 00:06:06.075 --rc geninfo_unexecuted_blocks=1 00:06:06.075 00:06:06.075 ' 00:06:06.075 20:25:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:06.075 20:25:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:06.075 20:25:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.075 20:25:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.075 ************************************ 00:06:06.075 START TEST skip_rpc 00:06:06.075 ************************************ 00:06:06.075 20:25:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:06.075 20:25:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=154078 00:06:06.075 20:25:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.075 20:25:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:06.075 20:25:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:06.335 [2024-12-05 20:25:59.564322] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:06:06.335 [2024-12-05 20:25:59.564361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154078 ] 00:06:06.335 [2024-12-05 20:25:59.635427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.335 [2024-12-05 20:25:59.672637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.615 20:26:04 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 154078 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 154078 ']' 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 154078 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154078 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154078' 00:06:11.615 killing process with pid 154078 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 154078 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 154078 00:06:11.615 00:06:11.615 real 0m5.367s 00:06:11.615 user 0m5.117s 00:06:11.615 sys 0m0.287s 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.615 20:26:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.615 ************************************ 00:06:11.615 END TEST skip_rpc 00:06:11.615 ************************************ 00:06:11.615 20:26:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:11.615 20:26:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.615 20:26:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.615 20:26:04 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:11.615 ************************************ 00:06:11.615 START TEST skip_rpc_with_json 00:06:11.615 ************************************ 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=155018 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 155018 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 155018 ']' 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.615 20:26:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.615 [2024-12-05 20:26:05.006405] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:06:11.615 [2024-12-05 20:26:05.006450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155018 ] 00:06:11.877 [2024-12-05 20:26:05.079365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.877 [2024-12-05 20:26:05.116321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.450 [2024-12-05 20:26:05.817448] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:12.450 request: 00:06:12.450 { 00:06:12.450 "trtype": "tcp", 00:06:12.450 "method": "nvmf_get_transports", 00:06:12.450 "req_id": 1 00:06:12.450 } 00:06:12.450 Got JSON-RPC error response 00:06:12.450 response: 00:06:12.450 { 00:06:12.450 "code": -19, 00:06:12.450 "message": "No such device" 00:06:12.450 } 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.450 [2024-12-05 20:26:05.829545] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.450 20:26:05 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.450 20:26:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.710 20:26:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.710 20:26:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:12.710 { 00:06:12.710 "subsystems": [ 00:06:12.710 { 00:06:12.710 "subsystem": "fsdev", 00:06:12.710 "config": [ 00:06:12.710 { 00:06:12.710 "method": "fsdev_set_opts", 00:06:12.710 "params": { 00:06:12.710 "fsdev_io_pool_size": 65535, 00:06:12.710 "fsdev_io_cache_size": 256 00:06:12.710 } 00:06:12.710 } 00:06:12.710 ] 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "subsystem": "vfio_user_target", 00:06:12.710 "config": null 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "subsystem": "keyring", 00:06:12.710 "config": [] 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "subsystem": "iobuf", 00:06:12.710 "config": [ 00:06:12.710 { 00:06:12.710 "method": "iobuf_set_options", 00:06:12.710 "params": { 00:06:12.710 "small_pool_count": 8192, 00:06:12.710 "large_pool_count": 1024, 00:06:12.710 "small_bufsize": 8192, 00:06:12.710 "large_bufsize": 135168, 00:06:12.710 "enable_numa": false 00:06:12.710 } 00:06:12.710 } 00:06:12.710 ] 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "subsystem": "sock", 00:06:12.710 "config": [ 00:06:12.710 { 00:06:12.710 "method": "sock_set_default_impl", 00:06:12.710 "params": { 00:06:12.710 "impl_name": "posix" 00:06:12.710 } 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "method": "sock_impl_set_options", 00:06:12.710 "params": { 00:06:12.710 "impl_name": "ssl", 00:06:12.710 "recv_buf_size": 4096, 00:06:12.710 "send_buf_size": 4096, 
00:06:12.710 "enable_recv_pipe": true, 00:06:12.710 "enable_quickack": false, 00:06:12.710 "enable_placement_id": 0, 00:06:12.710 "enable_zerocopy_send_server": true, 00:06:12.710 "enable_zerocopy_send_client": false, 00:06:12.710 "zerocopy_threshold": 0, 00:06:12.710 "tls_version": 0, 00:06:12.710 "enable_ktls": false 00:06:12.710 } 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "method": "sock_impl_set_options", 00:06:12.710 "params": { 00:06:12.710 "impl_name": "posix", 00:06:12.710 "recv_buf_size": 2097152, 00:06:12.710 "send_buf_size": 2097152, 00:06:12.710 "enable_recv_pipe": true, 00:06:12.710 "enable_quickack": false, 00:06:12.710 "enable_placement_id": 0, 00:06:12.710 "enable_zerocopy_send_server": true, 00:06:12.710 "enable_zerocopy_send_client": false, 00:06:12.710 "zerocopy_threshold": 0, 00:06:12.710 "tls_version": 0, 00:06:12.710 "enable_ktls": false 00:06:12.710 } 00:06:12.710 } 00:06:12.710 ] 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "subsystem": "vmd", 00:06:12.710 "config": [] 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "subsystem": "accel", 00:06:12.710 "config": [ 00:06:12.710 { 00:06:12.710 "method": "accel_set_options", 00:06:12.710 "params": { 00:06:12.710 "small_cache_size": 128, 00:06:12.710 "large_cache_size": 16, 00:06:12.710 "task_count": 2048, 00:06:12.710 "sequence_count": 2048, 00:06:12.710 "buf_count": 2048 00:06:12.710 } 00:06:12.710 } 00:06:12.710 ] 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "subsystem": "bdev", 00:06:12.710 "config": [ 00:06:12.710 { 00:06:12.710 "method": "bdev_set_options", 00:06:12.710 "params": { 00:06:12.710 "bdev_io_pool_size": 65535, 00:06:12.710 "bdev_io_cache_size": 256, 00:06:12.710 "bdev_auto_examine": true, 00:06:12.710 "iobuf_small_cache_size": 128, 00:06:12.710 "iobuf_large_cache_size": 16 00:06:12.710 } 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "method": "bdev_raid_set_options", 00:06:12.710 "params": { 00:06:12.710 "process_window_size_kb": 1024, 00:06:12.710 "process_max_bandwidth_mb_sec": 0 
00:06:12.710 } 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "method": "bdev_iscsi_set_options", 00:06:12.710 "params": { 00:06:12.710 "timeout_sec": 30 00:06:12.710 } 00:06:12.710 }, 00:06:12.710 { 00:06:12.710 "method": "bdev_nvme_set_options", 00:06:12.710 "params": { 00:06:12.710 "action_on_timeout": "none", 00:06:12.710 "timeout_us": 0, 00:06:12.710 "timeout_admin_us": 0, 00:06:12.710 "keep_alive_timeout_ms": 10000, 00:06:12.710 "arbitration_burst": 0, 00:06:12.710 "low_priority_weight": 0, 00:06:12.710 "medium_priority_weight": 0, 00:06:12.710 "high_priority_weight": 0, 00:06:12.710 "nvme_adminq_poll_period_us": 10000, 00:06:12.710 "nvme_ioq_poll_period_us": 0, 00:06:12.710 "io_queue_requests": 0, 00:06:12.710 "delay_cmd_submit": true, 00:06:12.710 "transport_retry_count": 4, 00:06:12.710 "bdev_retry_count": 3, 00:06:12.710 "transport_ack_timeout": 0, 00:06:12.710 "ctrlr_loss_timeout_sec": 0, 00:06:12.711 "reconnect_delay_sec": 0, 00:06:12.711 "fast_io_fail_timeout_sec": 0, 00:06:12.711 "disable_auto_failback": false, 00:06:12.711 "generate_uuids": false, 00:06:12.711 "transport_tos": 0, 00:06:12.711 "nvme_error_stat": false, 00:06:12.711 "rdma_srq_size": 0, 00:06:12.711 "io_path_stat": false, 00:06:12.711 "allow_accel_sequence": false, 00:06:12.711 "rdma_max_cq_size": 0, 00:06:12.711 "rdma_cm_event_timeout_ms": 0, 00:06:12.711 "dhchap_digests": [ 00:06:12.711 "sha256", 00:06:12.711 "sha384", 00:06:12.711 "sha512" 00:06:12.711 ], 00:06:12.711 "dhchap_dhgroups": [ 00:06:12.711 "null", 00:06:12.711 "ffdhe2048", 00:06:12.711 "ffdhe3072", 00:06:12.711 "ffdhe4096", 00:06:12.711 "ffdhe6144", 00:06:12.711 "ffdhe8192" 00:06:12.711 ] 00:06:12.711 } 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "method": "bdev_nvme_set_hotplug", 00:06:12.711 "params": { 00:06:12.711 "period_us": 100000, 00:06:12.711 "enable": false 00:06:12.711 } 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "method": "bdev_wait_for_examine" 00:06:12.711 } 00:06:12.711 ] 00:06:12.711 }, 00:06:12.711 { 
00:06:12.711 "subsystem": "scsi", 00:06:12.711 "config": null 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "subsystem": "scheduler", 00:06:12.711 "config": [ 00:06:12.711 { 00:06:12.711 "method": "framework_set_scheduler", 00:06:12.711 "params": { 00:06:12.711 "name": "static" 00:06:12.711 } 00:06:12.711 } 00:06:12.711 ] 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "subsystem": "vhost_scsi", 00:06:12.711 "config": [] 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "subsystem": "vhost_blk", 00:06:12.711 "config": [] 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "subsystem": "ublk", 00:06:12.711 "config": [] 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "subsystem": "nbd", 00:06:12.711 "config": [] 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "subsystem": "nvmf", 00:06:12.711 "config": [ 00:06:12.711 { 00:06:12.711 "method": "nvmf_set_config", 00:06:12.711 "params": { 00:06:12.711 "discovery_filter": "match_any", 00:06:12.711 "admin_cmd_passthru": { 00:06:12.711 "identify_ctrlr": false 00:06:12.711 }, 00:06:12.711 "dhchap_digests": [ 00:06:12.711 "sha256", 00:06:12.711 "sha384", 00:06:12.711 "sha512" 00:06:12.711 ], 00:06:12.711 "dhchap_dhgroups": [ 00:06:12.711 "null", 00:06:12.711 "ffdhe2048", 00:06:12.711 "ffdhe3072", 00:06:12.711 "ffdhe4096", 00:06:12.711 "ffdhe6144", 00:06:12.711 "ffdhe8192" 00:06:12.711 ] 00:06:12.711 } 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "method": "nvmf_set_max_subsystems", 00:06:12.711 "params": { 00:06:12.711 "max_subsystems": 1024 00:06:12.711 } 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "method": "nvmf_set_crdt", 00:06:12.711 "params": { 00:06:12.711 "crdt1": 0, 00:06:12.711 "crdt2": 0, 00:06:12.711 "crdt3": 0 00:06:12.711 } 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "method": "nvmf_create_transport", 00:06:12.711 "params": { 00:06:12.711 "trtype": "TCP", 00:06:12.711 "max_queue_depth": 128, 00:06:12.711 "max_io_qpairs_per_ctrlr": 127, 00:06:12.711 "in_capsule_data_size": 4096, 00:06:12.711 "max_io_size": 131072, 00:06:12.711 
"io_unit_size": 131072, 00:06:12.711 "max_aq_depth": 128, 00:06:12.711 "num_shared_buffers": 511, 00:06:12.711 "buf_cache_size": 4294967295, 00:06:12.711 "dif_insert_or_strip": false, 00:06:12.711 "zcopy": false, 00:06:12.711 "c2h_success": true, 00:06:12.711 "sock_priority": 0, 00:06:12.711 "abort_timeout_sec": 1, 00:06:12.711 "ack_timeout": 0, 00:06:12.711 "data_wr_pool_size": 0 00:06:12.711 } 00:06:12.711 } 00:06:12.711 ] 00:06:12.711 }, 00:06:12.711 { 00:06:12.711 "subsystem": "iscsi", 00:06:12.711 "config": [ 00:06:12.711 { 00:06:12.711 "method": "iscsi_set_options", 00:06:12.711 "params": { 00:06:12.711 "node_base": "iqn.2016-06.io.spdk", 00:06:12.711 "max_sessions": 128, 00:06:12.711 "max_connections_per_session": 2, 00:06:12.711 "max_queue_depth": 64, 00:06:12.711 "default_time2wait": 2, 00:06:12.711 "default_time2retain": 20, 00:06:12.711 "first_burst_length": 8192, 00:06:12.711 "immediate_data": true, 00:06:12.711 "allow_duplicated_isid": false, 00:06:12.711 "error_recovery_level": 0, 00:06:12.711 "nop_timeout": 60, 00:06:12.711 "nop_in_interval": 30, 00:06:12.711 "disable_chap": false, 00:06:12.711 "require_chap": false, 00:06:12.711 "mutual_chap": false, 00:06:12.711 "chap_group": 0, 00:06:12.711 "max_large_datain_per_connection": 64, 00:06:12.711 "max_r2t_per_connection": 4, 00:06:12.711 "pdu_pool_size": 36864, 00:06:12.711 "immediate_data_pool_size": 16384, 00:06:12.711 "data_out_pool_size": 2048 00:06:12.711 } 00:06:12.711 } 00:06:12.711 ] 00:06:12.711 } 00:06:12.711 ] 00:06:12.711 } 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 155018 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 155018 ']' 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 155018 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 155018 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 155018' 00:06:12.711 killing process with pid 155018 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 155018 00:06:12.711 20:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 155018 00:06:12.970 20:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=155270 00:06:12.970 20:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:12.970 20:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:18.268 20:26:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 155270 00:06:18.268 20:26:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 155270 ']' 00:06:18.268 20:26:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 155270 00:06:18.268 20:26:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:18.268 20:26:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.268 20:26:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 155270 00:06:18.268 20:26:11 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.268 20:26:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.268 20:26:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 155270' 00:06:18.268 killing process with pid 155270 00:06:18.268 20:26:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 155270 00:06:18.268 20:26:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 155270 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:18.527 00:06:18.527 real 0m6.765s 00:06:18.527 user 0m6.556s 00:06:18.527 sys 0m0.664s 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.527 ************************************ 00:06:18.527 END TEST skip_rpc_with_json 00:06:18.527 ************************************ 00:06:18.527 20:26:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:18.527 20:26:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.527 20:26:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.527 20:26:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.527 ************************************ 00:06:18.527 START TEST skip_rpc_with_delay 00:06:18.527 ************************************ 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.527 [2024-12-05 20:26:11.843266] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.527 00:06:18.527 real 0m0.068s 00:06:18.527 user 0m0.047s 00:06:18.527 sys 0m0.021s 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.527 20:26:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:18.527 ************************************ 00:06:18.527 END TEST skip_rpc_with_delay 00:06:18.527 ************************************ 00:06:18.527 20:26:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:18.527 20:26:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:18.527 20:26:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:18.527 20:26:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.527 20:26:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.527 20:26:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.527 ************************************ 00:06:18.527 START TEST exit_on_failed_rpc_init 00:06:18.527 ************************************ 00:06:18.527 20:26:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:18.527 20:26:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=156286 00:06:18.527 20:26:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 156286 00:06:18.527 20:26:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:06:18.527 20:26:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 156286 ']' 00:06:18.527 20:26:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.527 20:26:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.528 20:26:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.528 20:26:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.528 20:26:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:18.787 [2024-12-05 20:26:11.981909] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:06:18.787 [2024-12-05 20:26:11.981950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156286 ] 00:06:18.787 [2024-12-05 20:26:12.055886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.787 [2024-12-05 20:26:12.096254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.357 
20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.357 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.616 [2024-12-05 20:26:12.845843] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:06:19.616 [2024-12-05 20:26:12.845882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156549 ] 00:06:19.616 [2024-12-05 20:26:12.918261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.616 [2024-12-05 20:26:12.956531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.616 [2024-12-05 20:26:12.956585] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:19.616 [2024-12-05 20:26:12.956594] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:19.616 [2024-12-05 20:26:12.956600] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.616 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:19.616 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.616 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:19.616 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:19.616 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:19.616 20:26:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.616 20:26:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:19.616 20:26:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 156286 00:06:19.616 20:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 156286 ']' 00:06:19.616 20:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 156286 00:06:19.616 20:26:13 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:19.616 20:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.616 20:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 156286 00:06:19.616 20:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.616 20:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.616 20:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 156286' 00:06:19.616 killing process with pid 156286 00:06:19.616 20:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 156286 00:06:19.876 20:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 156286 00:06:20.136 00:06:20.136 real 0m1.422s 00:06:20.136 user 0m1.612s 00:06:20.136 sys 0m0.409s 00:06:20.136 20:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.136 20:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.136 ************************************ 00:06:20.136 END TEST exit_on_failed_rpc_init 00:06:20.136 ************************************ 00:06:20.136 20:26:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:20.136 00:06:20.136 real 0m14.084s 00:06:20.136 user 0m13.543s 00:06:20.136 sys 0m1.664s 00:06:20.136 20:26:13 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.136 20:26:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.136 ************************************ 00:06:20.136 END TEST skip_rpc 00:06:20.136 ************************************ 00:06:20.136 20:26:13 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.136 20:26:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.136 20:26:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.136 20:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:20.136 ************************************ 00:06:20.136 START TEST rpc_client 00:06:20.136 ************************************ 00:06:20.136 20:26:13 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.136 * Looking for test storage... 00:06:20.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:20.136 20:26:13 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.136 20:26:13 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.136 20:26:13 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.396 20:26:13 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.396 20:26:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:20.396 20:26:13 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.396 20:26:13 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.396 --rc genhtml_branch_coverage=1 00:06:20.396 --rc genhtml_function_coverage=1 00:06:20.396 --rc genhtml_legend=1 00:06:20.396 --rc geninfo_all_blocks=1 00:06:20.396 --rc geninfo_unexecuted_blocks=1 00:06:20.396 00:06:20.396 ' 00:06:20.396 20:26:13 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.396 --rc genhtml_branch_coverage=1 
00:06:20.396 --rc genhtml_function_coverage=1 00:06:20.396 --rc genhtml_legend=1 00:06:20.396 --rc geninfo_all_blocks=1 00:06:20.396 --rc geninfo_unexecuted_blocks=1 00:06:20.396 00:06:20.396 ' 00:06:20.396 20:26:13 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.396 --rc genhtml_branch_coverage=1 00:06:20.396 --rc genhtml_function_coverage=1 00:06:20.396 --rc genhtml_legend=1 00:06:20.396 --rc geninfo_all_blocks=1 00:06:20.396 --rc geninfo_unexecuted_blocks=1 00:06:20.396 00:06:20.396 ' 00:06:20.396 20:26:13 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.396 --rc genhtml_branch_coverage=1 00:06:20.396 --rc genhtml_function_coverage=1 00:06:20.396 --rc genhtml_legend=1 00:06:20.396 --rc geninfo_all_blocks=1 00:06:20.396 --rc geninfo_unexecuted_blocks=1 00:06:20.396 00:06:20.396 ' 00:06:20.396 20:26:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:20.396 OK 00:06:20.396 20:26:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:20.396 00:06:20.396 real 0m0.199s 00:06:20.396 user 0m0.121s 00:06:20.396 sys 0m0.092s 00:06:20.396 20:26:13 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.396 20:26:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:20.396 ************************************ 00:06:20.396 END TEST rpc_client 00:06:20.396 ************************************ 00:06:20.396 20:26:13 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.396 20:26:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.396 20:26:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.396 20:26:13 -- common/autotest_common.sh@10 
-- # set +x 00:06:20.396 ************************************ 00:06:20.396 START TEST json_config 00:06:20.396 ************************************ 00:06:20.396 20:26:13 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.396 20:26:13 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.396 20:26:13 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.396 20:26:13 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.657 20:26:13 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.657 20:26:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.657 20:26:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.657 20:26:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.657 20:26:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.657 20:26:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.657 20:26:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.657 20:26:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.657 20:26:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.657 20:26:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.657 20:26:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.657 20:26:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.657 20:26:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:20.657 20:26:13 json_config -- scripts/common.sh@345 -- # : 1 00:06:20.657 20:26:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.657 20:26:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.657 20:26:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:20.657 20:26:13 json_config -- scripts/common.sh@353 -- # local d=1 00:06:20.657 20:26:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.657 20:26:13 json_config -- scripts/common.sh@355 -- # echo 1 00:06:20.657 20:26:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.657 20:26:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:20.657 20:26:13 json_config -- scripts/common.sh@353 -- # local d=2 00:06:20.657 20:26:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.657 20:26:13 json_config -- scripts/common.sh@355 -- # echo 2 00:06:20.657 20:26:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.657 20:26:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.657 20:26:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.657 20:26:13 json_config -- scripts/common.sh@368 -- # return 0 00:06:20.657 20:26:13 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.657 20:26:13 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.657 --rc genhtml_branch_coverage=1 00:06:20.657 --rc genhtml_function_coverage=1 00:06:20.657 --rc genhtml_legend=1 00:06:20.657 --rc geninfo_all_blocks=1 00:06:20.657 --rc geninfo_unexecuted_blocks=1 00:06:20.657 00:06:20.657 ' 00:06:20.657 20:26:13 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.657 --rc genhtml_branch_coverage=1 00:06:20.657 --rc genhtml_function_coverage=1 00:06:20.657 --rc genhtml_legend=1 00:06:20.657 --rc geninfo_all_blocks=1 00:06:20.657 --rc geninfo_unexecuted_blocks=1 00:06:20.657 00:06:20.657 ' 00:06:20.657 20:26:13 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.657 --rc genhtml_branch_coverage=1 00:06:20.657 --rc genhtml_function_coverage=1 00:06:20.657 --rc genhtml_legend=1 00:06:20.657 --rc geninfo_all_blocks=1 00:06:20.657 --rc geninfo_unexecuted_blocks=1 00:06:20.657 00:06:20.657 ' 00:06:20.657 20:26:13 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.657 --rc genhtml_branch_coverage=1 00:06:20.657 --rc genhtml_function_coverage=1 00:06:20.657 --rc genhtml_legend=1 00:06:20.657 --rc geninfo_all_blocks=1 00:06:20.657 --rc geninfo_unexecuted_blocks=1 00:06:20.657 00:06:20.657 ' 00:06:20.657 20:26:13 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.657 20:26:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:20.657 20:26:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.657 20:26:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.657 20:26:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.657 20:26:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.657 20:26:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.657 20:26:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.657 20:26:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.657 20:26:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.657 20:26:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.657 20:26:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.658 20:26:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.658 20:26:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.658 20:26:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.658 20:26:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.658 20:26:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.658 20:26:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.658 20:26:13 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.658 20:26:13 json_config -- paths/export.sh@5 -- # export PATH 00:06:20.658 20:26:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@51 -- # : 0 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.658 20:26:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:20.658 INFO: JSON configuration test init 00:06:20.658 20:26:13 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:20.658 20:26:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.658 20:26:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:20.658 20:26:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.658 20:26:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.658 20:26:13 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:20.658 20:26:13 json_config -- json_config/common.sh@9 -- # local app=target 00:06:20.658 20:26:13 json_config -- json_config/common.sh@10 -- # shift 00:06:20.658 20:26:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.658 20:26:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.658 20:26:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.658 20:26:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.658 20:26:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.658 20:26:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=156933 00:06:20.658 20:26:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.658 Waiting for target to run... 
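The "Waiting for target to run..." message above comes from SPDK's waitforlisten step: spdk_tgt was launched with `-r /var/tmp/spdk_tgt.sock`, and the harness polls until that RPC socket appears before issuing any RPCs. A minimal sketch of the polling idea, runnable on its own (a temp file stands in for the socket; the function name and retry count are illustrative assumptions, not SPDK's exact helper):

```shell
# Poll until the target's RPC socket path appears, with a bounded retry
# count. SPDK's real waitforlisten also checks the target pid and issues
# an rpc to confirm readiness; this sketch only watches for the path.
wait_for_sock() {
    path=$1 retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -e "$path" ] && return 0   # socket showed up: target is listening
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                         # target never came up
}

sock=$(mktemp -u)             # stands in for /var/tmp/spdk_tgt.sock
( sleep 0.3; : > "$sock" ) &  # "target" creating its socket after startup
wait_for_sock "$sock" 50 && echo ready
```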
00:06:20.658 20:26:13 json_config -- json_config/common.sh@25 -- # waitforlisten 156933 /var/tmp/spdk_tgt.sock 00:06:20.658 20:26:13 json_config -- common/autotest_common.sh@835 -- # '[' -z 156933 ']' 00:06:20.658 20:26:13 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.658 20:26:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:20.658 20:26:13 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.658 20:26:13 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.658 20:26:13 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.658 20:26:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.658 [2024-12-05 20:26:13.965472] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
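Earlier in the trace, scripts/common.sh runs `lt 1.15 2` to decide whether the installed lcov predates 2.x, splitting both version strings and comparing them component by component. A simplified, runnable sketch of that comparison (split on "." only; the function name is an assumption, and SPDK's real cmp_versions also splits on "-" and ":" and supports other operators):

```shell
# Return 0 if $1 is strictly less than $2 as dotted version strings.
# Missing components are treated as 0, so 1.15 vs 2 compares 1 < 2.
version_lt() {
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local n=${#ver1[@]}
    (( ${#ver2[@]} > n )) && n=${#ver2[@]}
    for (( v = 0; v < n; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0   # strictly smaller component: less-than
        (( a > b )) && return 1   # strictly larger component: not less-than
    done
    return 1                      # all components equal: not strictly less
}

version_lt 1.15 2 && echo "1.15 < 2"
```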
00:06:20.658 [2024-12-05 20:26:13.965517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156933 ] 00:06:20.918 [2024-12-05 20:26:14.245478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.918 [2024-12-05 20:26:14.276761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.488 20:26:14 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.488 20:26:14 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:21.488 20:26:14 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.488 00:06:21.488 20:26:14 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:21.488 20:26:14 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:21.488 20:26:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.488 20:26:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.488 20:26:14 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:21.488 20:26:14 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:21.488 20:26:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.488 20:26:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.488 20:26:14 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:21.488 20:26:14 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:21.488 20:26:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:24.793 20:26:17 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:06:24.793 20:26:17 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:24.793 20:26:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.793 20:26:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.793 20:26:17 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:24.793 20:26:17 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:24.793 20:26:17 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:24.793 20:26:17 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:24.793 20:26:17 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:24.794 20:26:17 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:24.794 20:26:17 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:24.794 20:26:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@54 -- # sort 00:06:24.794 20:26:18 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:24.794 20:26:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.794 20:26:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:24.794 20:26:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.794 20:26:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:24.794 20:26:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.794 20:26:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.054 MallocForNvmf0 00:06:25.054 20:26:18 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
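The target setup in this part of the trace is driven entirely through scripts/rpc.py against the target's UNIX socket: two malloc bdevs, a TCP transport, one NVMe-oF subsystem carrying both namespaces, and a TCP listener on port 4420. The same call sequence can be sketched with a stub in place of rpc.py so it runs without a live SPDK target (the `tgt_rpc` stub is an assumption; the commands and arguments are the ones the trace issues):

```shell
SOCK=/var/tmp/spdk_tgt.sock

# Stub: the real harness runs ".../spdk/scripts/rpc.py -s $SOCK $@".
tgt_rpc() { echo "rpc.py -s $SOCK $*"; }

# Backing bdevs (8 MiB / 512 B blocks and 4 MiB / 1024 B blocks).
tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport with an 8192-byte IO unit size and no data digest cores.
tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
# One subsystem, both namespaces, and a listener on 127.0.0.1:4420.
tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```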
00:06:25.054 20:26:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.054 MallocForNvmf1 00:06:25.054 20:26:18 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.054 20:26:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.313 [2024-12-05 20:26:18.624489] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.313 20:26:18 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.313 20:26:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.573 20:26:18 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.573 20:26:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.573 20:26:18 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:25.573 20:26:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:25.832 20:26:19 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:25.832 20:26:19 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.092 [2024-12-05 20:26:19.278547] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:26.092 20:26:19 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:26.092 20:26:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:26.092 20:26:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.092 20:26:19 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:26.092 20:26:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:26.092 20:26:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.092 20:26:19 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:26.092 20:26:19 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.092 20:26:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.092 MallocBdevForConfigChangeCheck 00:06:26.352 20:26:19 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:26.352 20:26:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:26.352 20:26:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.352 20:26:19 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:26.352 20:26:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.612 20:26:19 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:06:26.612 INFO: shutting down applications... 00:06:26.612 20:26:19 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:26.612 20:26:19 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:26.612 20:26:19 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:26.612 20:26:19 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:28.521 Calling clear_iscsi_subsystem 00:06:28.521 Calling clear_nvmf_subsystem 00:06:28.521 Calling clear_nbd_subsystem 00:06:28.521 Calling clear_ublk_subsystem 00:06:28.521 Calling clear_vhost_blk_subsystem 00:06:28.521 Calling clear_vhost_scsi_subsystem 00:06:28.521 Calling clear_bdev_subsystem 00:06:28.521 20:26:21 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:28.521 20:26:21 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:28.522 20:26:21 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:28.522 20:26:21 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:28.522 20:26:21 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:28.522 20:26:21 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:28.522 20:26:21 json_config -- json_config/json_config.sh@352 -- # break 00:06:28.522 20:26:21 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:28.522 20:26:21 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:06:28.522 20:26:21 json_config -- json_config/common.sh@31 -- # local app=target 00:06:28.522 20:26:21 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:28.522 20:26:21 json_config -- json_config/common.sh@35 -- # [[ -n 156933 ]] 00:06:28.522 20:26:21 json_config -- json_config/common.sh@38 -- # kill -SIGINT 156933 00:06:28.522 20:26:21 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:28.522 20:26:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.522 20:26:21 json_config -- json_config/common.sh@41 -- # kill -0 156933 00:06:28.522 20:26:21 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.122 20:26:22 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.122 20:26:22 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.122 20:26:22 json_config -- json_config/common.sh@41 -- # kill -0 156933 00:06:29.122 20:26:22 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:29.122 20:26:22 json_config -- json_config/common.sh@43 -- # break 00:06:29.122 20:26:22 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:29.122 20:26:22 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:29.122 SPDK target shutdown done 00:06:29.122 20:26:22 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:29.122 INFO: relaunching applications... 
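The shutdown sequence traced above sends SIGINT to the target and then polls the PID with `kill -0` (up to 30 tries, 0.5 s apart) until the process disappears. A simplified sketch of that pattern, modeled on what the trace of test/json_config/common.sh shows (the function name here is illustrative, not the real helper's):

```shell
# Sketch of the shutdown loop seen in the trace: send SIGINT, then
# poll with `kill -0` until the process is gone or we give up.
shutdown_app() {
    local pid=$1
    # Send SIGINT; tolerate failure if the process is already gone.
    kill -SIGINT "$pid" 2>/dev/null || true
    local i
    for ((i = 0; i < 30; i++)); do
        # kill -0 delivers no signal; it only tests that the PID exists.
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 0.5
    done
    return 1   # still alive after ~15 s of polling
}
```

The real helper additionally clears `app_pid["$app"]` on success and falls through to a hard kill path, as the `app_pid["$app"]=` and `break` entries in the trace indicate.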
00:06:29.122 20:26:22 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.122 20:26:22 json_config -- json_config/common.sh@9 -- # local app=target 00:06:29.122 20:26:22 json_config -- json_config/common.sh@10 -- # shift 00:06:29.122 20:26:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:29.122 20:26:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:29.122 20:26:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:29.122 20:26:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.122 20:26:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.122 20:26:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=158607 00:06:29.122 20:26:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:29.122 Waiting for target to run... 00:06:29.122 20:26:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.122 20:26:22 json_config -- json_config/common.sh@25 -- # waitforlisten 158607 /var/tmp/spdk_tgt.sock 00:06:29.123 20:26:22 json_config -- common/autotest_common.sh@835 -- # '[' -z 158607 ']' 00:06:29.123 20:26:22 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.123 20:26:22 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.123 20:26:22 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:29.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:29.123 20:26:22 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.123 20:26:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.123 [2024-12-05 20:26:22.442084] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:06:29.123 [2024-12-05 20:26:22.442139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158607 ] 00:06:29.691 [2024-12-05 20:26:22.872773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.691 [2024-12-05 20:26:22.924467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.980 [2024-12-05 20:26:25.958331] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.980 [2024-12-05 20:26:25.990672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:33.238 20:26:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.238 20:26:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:33.238 20:26:26 json_config -- json_config/common.sh@26 -- # echo '' 00:06:33.238 00:06:33.238 20:26:26 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:33.238 20:26:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:33.238 INFO: Checking if target configuration is the same... 
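The "Waiting for target to run..." step above (`waitforlisten`) blocks until spdk_tgt is listening on the RPC UNIX socket. A minimal sketch of the idea, assuming only that readiness is signalled by the socket path appearing (the real helper also confirms with an RPC call; `waitforsocket` is an illustrative name):

```shell
# Poll until a UNIX-domain socket path appears, e.g. /var/tmp/spdk_tgt.sock.
waitforsocket() {
    local sock=$1 retries=${2:-100}
    local i
    for ((i = 0; i < retries; i++)); do
        # -S tests specifically for a socket file, not just any path.
        if [ -S "$sock" ]; then return 0; fi
        sleep 0.1
    done
    return 1   # socket never appeared within the retry budget
}
```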
00:06:33.238 20:26:26 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.238 20:26:26 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:33.238 20:26:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.238 + '[' 2 -ne 2 ']' 00:06:33.238 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:33.238 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:33.238 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.238 +++ basename /dev/fd/62 00:06:33.238 ++ mktemp /tmp/62.XXX 00:06:33.238 + tmp_file_1=/tmp/62.YTb 00:06:33.238 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.238 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:33.238 + tmp_file_2=/tmp/spdk_tgt_config.json.km2 00:06:33.238 + ret=0 00:06:33.238 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.806 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.806 + diff -u /tmp/62.YTb /tmp/spdk_tgt_config.json.km2 00:06:33.806 + echo 'INFO: JSON config files are the same' 00:06:33.806 INFO: JSON config files are the same 00:06:33.806 + rm /tmp/62.YTb /tmp/spdk_tgt_config.json.km2 00:06:33.806 + exit 0 00:06:33.806 20:26:26 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:33.806 20:26:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:33.806 INFO: changing configuration and checking if this can be detected... 
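The comparison traced above (json_diff.sh) saves the running config, normalizes both JSON files, and diffs them: exit 0 means "same", exit 1 means "configuration change detected". A minimal sketch of the same idea, using a python3 sorted-keys dump where the real script pipes through config_filter.py -method sort (`json_diff` here is an illustrative stand-in, not the real script):

```shell
# Normalize two JSON files (sorted keys) into temp files and diff them.
json_diff() {
    local a=$1 b=$2 tmp1 tmp2
    tmp1=$(mktemp /tmp/json_diff.XXXXXX)
    tmp2=$(mktemp /tmp/json_diff.XXXXXX)
    python3 -c 'import json,sys; json.dump(json.load(open(sys.argv[1])), sys.stdout, sort_keys=True, indent=2)' "$a" > "$tmp1"
    python3 -c 'import json,sys; json.dump(json.load(open(sys.argv[1])), sys.stdout, sort_keys=True, indent=2)' "$b" > "$tmp2"
    if diff -u "$tmp1" "$tmp2"; then
        echo 'INFO: JSON config files are the same'
        rm -f "$tmp1" "$tmp2"
        return 0
    fi
    rm -f "$tmp1" "$tmp2"
    return 1
}
```

This is why the test deletes `MallocBdevForConfigChangeCheck` between the two runs: the second diff must return 1 to prove a change is actually detectable.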
00:06:33.806 20:26:26 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:33.806 20:26:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:33.806 20:26:27 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.806 20:26:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:33.806 20:26:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.806 + '[' 2 -ne 2 ']' 00:06:33.806 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:33.806 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:33.806 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.806 +++ basename /dev/fd/62 00:06:33.806 ++ mktemp /tmp/62.XXX 00:06:33.806 + tmp_file_1=/tmp/62.Ip7 00:06:33.806 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.807 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:33.807 + tmp_file_2=/tmp/spdk_tgt_config.json.Mex 00:06:33.807 + ret=0 00:06:33.807 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.376 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.376 + diff -u /tmp/62.Ip7 /tmp/spdk_tgt_config.json.Mex 00:06:34.376 + ret=1 00:06:34.376 + echo '=== Start of file: /tmp/62.Ip7 ===' 00:06:34.376 + cat /tmp/62.Ip7 00:06:34.376 + echo '=== End of file: /tmp/62.Ip7 ===' 00:06:34.376 + echo '' 00:06:34.376 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Mex ===' 00:06:34.376 + cat /tmp/spdk_tgt_config.json.Mex 00:06:34.376 + echo '=== End of file: /tmp/spdk_tgt_config.json.Mex ===' 00:06:34.376 + echo '' 00:06:34.376 + rm /tmp/62.Ip7 /tmp/spdk_tgt_config.json.Mex 00:06:34.376 + exit 1 00:06:34.376 20:26:27 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:34.376 INFO: configuration change detected. 
00:06:34.376 20:26:27 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:34.376 20:26:27 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:34.376 20:26:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.376 20:26:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@324 -- # [[ -n 158607 ]] 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.377 20:26:27 json_config -- json_config/json_config.sh@330 -- # killprocess 158607 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@954 -- # '[' -z 158607 ']' 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@958 -- # kill -0 158607 
00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@959 -- # uname 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158607 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158607' 00:06:34.377 killing process with pid 158607 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@973 -- # kill 158607 00:06:34.377 20:26:27 json_config -- common/autotest_common.sh@978 -- # wait 158607 00:06:36.287 20:26:29 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.287 20:26:29 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:36.287 20:26:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.287 20:26:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.287 20:26:29 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:36.287 20:26:29 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:36.287 INFO: Success 00:06:36.287 00:06:36.287 real 0m15.542s 00:06:36.287 user 0m15.839s 00:06:36.287 sys 0m2.473s 00:06:36.287 20:26:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.287 20:26:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.287 ************************************ 00:06:36.287 END TEST json_config 00:06:36.287 ************************************ 00:06:36.287 20:26:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:36.287 20:26:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.287 20:26:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.287 20:26:29 -- common/autotest_common.sh@10 -- # set +x 00:06:36.287 ************************************ 00:06:36.287 START TEST json_config_extra_key 00:06:36.287 ************************************ 00:06:36.287 20:26:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:36.287 20:26:29 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:36.287 20:26:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:36.287 20:26:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:36.287 20:26:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.287 20:26:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:36.287 20:26:29 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.287 20:26:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:36.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.287 --rc genhtml_branch_coverage=1 00:06:36.287 --rc genhtml_function_coverage=1 00:06:36.287 --rc genhtml_legend=1 00:06:36.287 --rc geninfo_all_blocks=1 
00:06:36.287 --rc geninfo_unexecuted_blocks=1 00:06:36.287 00:06:36.287 ' 00:06:36.288 20:26:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:36.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.288 --rc genhtml_branch_coverage=1 00:06:36.288 --rc genhtml_function_coverage=1 00:06:36.288 --rc genhtml_legend=1 00:06:36.288 --rc geninfo_all_blocks=1 00:06:36.288 --rc geninfo_unexecuted_blocks=1 00:06:36.288 00:06:36.288 ' 00:06:36.288 20:26:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:36.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.288 --rc genhtml_branch_coverage=1 00:06:36.288 --rc genhtml_function_coverage=1 00:06:36.288 --rc genhtml_legend=1 00:06:36.288 --rc geninfo_all_blocks=1 00:06:36.288 --rc geninfo_unexecuted_blocks=1 00:06:36.288 00:06:36.288 ' 00:06:36.288 20:26:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:36.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.288 --rc genhtml_branch_coverage=1 00:06:36.288 --rc genhtml_function_coverage=1 00:06:36.288 --rc genhtml_legend=1 00:06:36.288 --rc geninfo_all_blocks=1 00:06:36.288 --rc geninfo_unexecuted_blocks=1 00:06:36.288 00:06:36.288 ' 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.288 20:26:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:36.288 20:26:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.288 20:26:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.288 20:26:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.288 20:26:29 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.288 20:26:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.288 20:26:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.288 20:26:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:36.288 20:26:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:36.288 20:26:29 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:36.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:36.288 20:26:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:36.288 INFO: launching applications... 00:06:36.288 20:26:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.288 20:26:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:36.288 20:26:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:36.288 20:26:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:36.288 20:26:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:36.288 20:26:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:36.288 20:26:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:36.288 20:26:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:36.288 20:26:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=159998 00:06:36.288 20:26:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:36.288 Waiting for target to run... 
00:06:36.288 20:26:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 159998 /var/tmp/spdk_tgt.sock 00:06:36.288 20:26:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 159998 ']' 00:06:36.288 20:26:29 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.288 20:26:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:36.288 20:26:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.288 20:26:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:36.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:36.288 20:26:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.288 20:26:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:36.288 [2024-12-05 20:26:29.573626] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:06:36.288 [2024-12-05 20:26:29.573678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159998 ] 00:06:36.548 [2024-12-05 20:26:29.856503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.548 [2024-12-05 20:26:29.887181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.117 20:26:30 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.117 20:26:30 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:37.117 20:26:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:37.117 00:06:37.117 20:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:37.117 INFO: shutting down applications... 00:06:37.117 20:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:37.117 20:26:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:37.117 20:26:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:37.117 20:26:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 159998 ]] 00:06:37.117 20:26:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 159998 00:06:37.117 20:26:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:37.117 20:26:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.117 20:26:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 159998 00:06:37.117 20:26:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:37.686 20:26:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:37.686 20:26:30 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.686 20:26:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 159998 00:06:37.686 20:26:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:37.686 20:26:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:37.686 20:26:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:37.687 20:26:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:37.687 SPDK target shutdown done 00:06:37.687 20:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:37.687 Success 00:06:37.687 00:06:37.687 real 0m1.547s 00:06:37.687 user 0m1.315s 00:06:37.687 sys 0m0.381s 00:06:37.687 20:26:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.687 20:26:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:37.687 ************************************ 00:06:37.687 END TEST json_config_extra_key 00:06:37.687 ************************************ 00:06:37.687 20:26:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.687 20:26:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.687 20:26:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.687 20:26:30 -- common/autotest_common.sh@10 -- # set +x 00:06:37.687 ************************************ 00:06:37.687 START TEST alias_rpc 00:06:37.687 ************************************ 00:06:37.687 20:26:30 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.687 * Looking for test storage... 
00:06:37.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.687 20:26:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:37.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.687 --rc genhtml_branch_coverage=1 00:06:37.687 --rc genhtml_function_coverage=1 00:06:37.687 --rc genhtml_legend=1 00:06:37.687 --rc geninfo_all_blocks=1 00:06:37.687 --rc geninfo_unexecuted_blocks=1 00:06:37.687 00:06:37.687 ' 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:37.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.687 --rc genhtml_branch_coverage=1 00:06:37.687 --rc genhtml_function_coverage=1 00:06:37.687 --rc genhtml_legend=1 00:06:37.687 --rc geninfo_all_blocks=1 00:06:37.687 --rc geninfo_unexecuted_blocks=1 00:06:37.687 00:06:37.687 ' 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:06:37.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.687 --rc genhtml_branch_coverage=1 00:06:37.687 --rc genhtml_function_coverage=1 00:06:37.687 --rc genhtml_legend=1 00:06:37.687 --rc geninfo_all_blocks=1 00:06:37.687 --rc geninfo_unexecuted_blocks=1 00:06:37.687 00:06:37.687 ' 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:37.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.687 --rc genhtml_branch_coverage=1 00:06:37.687 --rc genhtml_function_coverage=1 00:06:37.687 --rc genhtml_legend=1 00:06:37.687 --rc geninfo_all_blocks=1 00:06:37.687 --rc geninfo_unexecuted_blocks=1 00:06:37.687 00:06:37.687 ' 00:06:37.687 20:26:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.687 20:26:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=160403 00:06:37.687 20:26:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 160403 00:06:37.687 20:26:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 160403 ']' 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.687 20:26:31 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.946 20:26:31 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.946 20:26:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.946 [2024-12-05 20:26:31.177042] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:06:37.946 [2024-12-05 20:26:31.177099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160403 ] 00:06:37.946 [2024-12-05 20:26:31.247053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.946 [2024-12-05 20:26:31.284845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.887 20:26:31 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.887 20:26:31 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.887 20:26:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:38.887 20:26:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 160403 00:06:38.887 20:26:32 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 160403 ']' 00:06:38.887 20:26:32 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 160403 00:06:38.887 20:26:32 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:38.887 20:26:32 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.887 20:26:32 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 160403 00:06:38.887 20:26:32 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.887 20:26:32 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.887 20:26:32 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 160403' 00:06:38.887 killing process with pid 160403 00:06:38.887 20:26:32 alias_rpc -- common/autotest_common.sh@973 -- # kill 160403 00:06:38.887 20:26:32 alias_rpc -- common/autotest_common.sh@978 -- # wait 160403 00:06:39.147 00:06:39.147 real 0m1.578s 00:06:39.147 user 0m1.687s 00:06:39.147 sys 0m0.451s 00:06:39.147 20:26:32 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.147 20:26:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.147 ************************************ 00:06:39.147 END TEST alias_rpc 00:06:39.147 ************************************ 00:06:39.147 20:26:32 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:39.147 20:26:32 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:39.147 20:26:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.147 20:26:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.147 20:26:32 -- common/autotest_common.sh@10 -- # set +x 00:06:39.408 ************************************ 00:06:39.408 START TEST spdkcli_tcp 00:06:39.408 ************************************ 00:06:39.408 20:26:32 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:39.408 * Looking for test storage... 
00:06:39.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:39.408 20:26:32 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:39.408 20:26:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:39.408 20:26:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:39.408 20:26:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.408 20:26:32 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:39.408 20:26:32 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.408 20:26:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:39.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.408 --rc genhtml_branch_coverage=1 00:06:39.408 --rc genhtml_function_coverage=1 00:06:39.408 --rc genhtml_legend=1 00:06:39.408 --rc geninfo_all_blocks=1 00:06:39.408 --rc geninfo_unexecuted_blocks=1 00:06:39.408 00:06:39.408 ' 00:06:39.408 20:26:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:39.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.408 --rc genhtml_branch_coverage=1 00:06:39.408 --rc genhtml_function_coverage=1 00:06:39.408 --rc genhtml_legend=1 00:06:39.408 --rc geninfo_all_blocks=1 00:06:39.408 --rc geninfo_unexecuted_blocks=1 00:06:39.408 00:06:39.408 ' 00:06:39.408 20:26:32 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:39.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.408 --rc genhtml_branch_coverage=1 00:06:39.408 --rc genhtml_function_coverage=1 00:06:39.408 --rc genhtml_legend=1 00:06:39.408 --rc geninfo_all_blocks=1 00:06:39.408 --rc geninfo_unexecuted_blocks=1 00:06:39.408 00:06:39.408 ' 00:06:39.408 20:26:32 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:39.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.408 --rc genhtml_branch_coverage=1 00:06:39.408 --rc genhtml_function_coverage=1 00:06:39.408 --rc genhtml_legend=1 00:06:39.408 --rc geninfo_all_blocks=1 00:06:39.408 --rc geninfo_unexecuted_blocks=1 00:06:39.408 00:06:39.409 ' 00:06:39.409 20:26:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:39.409 20:26:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:39.409 20:26:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:39.409 20:26:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:39.409 20:26:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:39.409 20:26:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:39.409 20:26:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:39.409 20:26:32 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.409 20:26:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.409 20:26:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=160732 00:06:39.409 20:26:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:39.409 20:26:32 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 160732 00:06:39.409 20:26:32 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 160732 ']' 00:06:39.409 20:26:32 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.409 20:26:32 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.409 20:26:32 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.409 20:26:32 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.409 20:26:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.409 [2024-12-05 20:26:32.828639] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:06:39.409 [2024-12-05 20:26:32.828687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160732 ] 00:06:39.668 [2024-12-05 20:26:32.900430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.668 [2024-12-05 20:26:32.941130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.668 [2024-12-05 20:26:32.941131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.237 20:26:33 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.237 20:26:33 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:40.237 20:26:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=160800 00:06:40.237 20:26:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:40.237 20:26:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:06:40.497 [ 00:06:40.497 "bdev_malloc_delete", 00:06:40.497 "bdev_malloc_create", 00:06:40.497 "bdev_null_resize", 00:06:40.497 "bdev_null_delete", 00:06:40.497 "bdev_null_create", 00:06:40.497 "bdev_nvme_cuse_unregister", 00:06:40.497 "bdev_nvme_cuse_register", 00:06:40.497 "bdev_opal_new_user", 00:06:40.497 "bdev_opal_set_lock_state", 00:06:40.497 "bdev_opal_delete", 00:06:40.497 "bdev_opal_get_info", 00:06:40.497 "bdev_opal_create", 00:06:40.497 "bdev_nvme_opal_revert", 00:06:40.497 "bdev_nvme_opal_init", 00:06:40.497 "bdev_nvme_send_cmd", 00:06:40.497 "bdev_nvme_set_keys", 00:06:40.497 "bdev_nvme_get_path_iostat", 00:06:40.497 "bdev_nvme_get_mdns_discovery_info", 00:06:40.497 "bdev_nvme_stop_mdns_discovery", 00:06:40.497 "bdev_nvme_start_mdns_discovery", 00:06:40.497 "bdev_nvme_set_multipath_policy", 00:06:40.497 "bdev_nvme_set_preferred_path", 00:06:40.497 "bdev_nvme_get_io_paths", 00:06:40.497 "bdev_nvme_remove_error_injection", 00:06:40.497 "bdev_nvme_add_error_injection", 00:06:40.497 "bdev_nvme_get_discovery_info", 00:06:40.497 "bdev_nvme_stop_discovery", 00:06:40.497 "bdev_nvme_start_discovery", 00:06:40.497 "bdev_nvme_get_controller_health_info", 00:06:40.497 "bdev_nvme_disable_controller", 00:06:40.497 "bdev_nvme_enable_controller", 00:06:40.497 "bdev_nvme_reset_controller", 00:06:40.497 "bdev_nvme_get_transport_statistics", 00:06:40.497 "bdev_nvme_apply_firmware", 00:06:40.497 "bdev_nvme_detach_controller", 00:06:40.497 "bdev_nvme_get_controllers", 00:06:40.497 "bdev_nvme_attach_controller", 00:06:40.497 "bdev_nvme_set_hotplug", 00:06:40.497 "bdev_nvme_set_options", 00:06:40.497 "bdev_passthru_delete", 00:06:40.497 "bdev_passthru_create", 00:06:40.497 "bdev_lvol_set_parent_bdev", 00:06:40.497 "bdev_lvol_set_parent", 00:06:40.497 "bdev_lvol_check_shallow_copy", 00:06:40.497 "bdev_lvol_start_shallow_copy", 00:06:40.497 "bdev_lvol_grow_lvstore", 00:06:40.497 "bdev_lvol_get_lvols", 00:06:40.497 "bdev_lvol_get_lvstores", 
00:06:40.497 "bdev_lvol_delete", 00:06:40.497 "bdev_lvol_set_read_only", 00:06:40.497 "bdev_lvol_resize", 00:06:40.497 "bdev_lvol_decouple_parent", 00:06:40.497 "bdev_lvol_inflate", 00:06:40.497 "bdev_lvol_rename", 00:06:40.498 "bdev_lvol_clone_bdev", 00:06:40.498 "bdev_lvol_clone", 00:06:40.498 "bdev_lvol_snapshot", 00:06:40.498 "bdev_lvol_create", 00:06:40.498 "bdev_lvol_delete_lvstore", 00:06:40.498 "bdev_lvol_rename_lvstore", 00:06:40.498 "bdev_lvol_create_lvstore", 00:06:40.498 "bdev_raid_set_options", 00:06:40.498 "bdev_raid_remove_base_bdev", 00:06:40.498 "bdev_raid_add_base_bdev", 00:06:40.498 "bdev_raid_delete", 00:06:40.498 "bdev_raid_create", 00:06:40.498 "bdev_raid_get_bdevs", 00:06:40.498 "bdev_error_inject_error", 00:06:40.498 "bdev_error_delete", 00:06:40.498 "bdev_error_create", 00:06:40.498 "bdev_split_delete", 00:06:40.498 "bdev_split_create", 00:06:40.498 "bdev_delay_delete", 00:06:40.498 "bdev_delay_create", 00:06:40.498 "bdev_delay_update_latency", 00:06:40.498 "bdev_zone_block_delete", 00:06:40.498 "bdev_zone_block_create", 00:06:40.498 "blobfs_create", 00:06:40.498 "blobfs_detect", 00:06:40.498 "blobfs_set_cache_size", 00:06:40.498 "bdev_aio_delete", 00:06:40.498 "bdev_aio_rescan", 00:06:40.498 "bdev_aio_create", 00:06:40.498 "bdev_ftl_set_property", 00:06:40.498 "bdev_ftl_get_properties", 00:06:40.498 "bdev_ftl_get_stats", 00:06:40.498 "bdev_ftl_unmap", 00:06:40.498 "bdev_ftl_unload", 00:06:40.498 "bdev_ftl_delete", 00:06:40.498 "bdev_ftl_load", 00:06:40.498 "bdev_ftl_create", 00:06:40.498 "bdev_virtio_attach_controller", 00:06:40.498 "bdev_virtio_scsi_get_devices", 00:06:40.498 "bdev_virtio_detach_controller", 00:06:40.498 "bdev_virtio_blk_set_hotplug", 00:06:40.498 "bdev_iscsi_delete", 00:06:40.498 "bdev_iscsi_create", 00:06:40.498 "bdev_iscsi_set_options", 00:06:40.498 "accel_error_inject_error", 00:06:40.498 "ioat_scan_accel_module", 00:06:40.498 "dsa_scan_accel_module", 00:06:40.498 "iaa_scan_accel_module", 00:06:40.498 
"vfu_virtio_create_fs_endpoint", 00:06:40.498 "vfu_virtio_create_scsi_endpoint", 00:06:40.498 "vfu_virtio_scsi_remove_target", 00:06:40.498 "vfu_virtio_scsi_add_target", 00:06:40.498 "vfu_virtio_create_blk_endpoint", 00:06:40.498 "vfu_virtio_delete_endpoint", 00:06:40.498 "keyring_file_remove_key", 00:06:40.498 "keyring_file_add_key", 00:06:40.498 "keyring_linux_set_options", 00:06:40.498 "fsdev_aio_delete", 00:06:40.498 "fsdev_aio_create", 00:06:40.498 "iscsi_get_histogram", 00:06:40.498 "iscsi_enable_histogram", 00:06:40.498 "iscsi_set_options", 00:06:40.498 "iscsi_get_auth_groups", 00:06:40.498 "iscsi_auth_group_remove_secret", 00:06:40.498 "iscsi_auth_group_add_secret", 00:06:40.498 "iscsi_delete_auth_group", 00:06:40.498 "iscsi_create_auth_group", 00:06:40.498 "iscsi_set_discovery_auth", 00:06:40.498 "iscsi_get_options", 00:06:40.498 "iscsi_target_node_request_logout", 00:06:40.498 "iscsi_target_node_set_redirect", 00:06:40.498 "iscsi_target_node_set_auth", 00:06:40.498 "iscsi_target_node_add_lun", 00:06:40.498 "iscsi_get_stats", 00:06:40.498 "iscsi_get_connections", 00:06:40.498 "iscsi_portal_group_set_auth", 00:06:40.498 "iscsi_start_portal_group", 00:06:40.498 "iscsi_delete_portal_group", 00:06:40.498 "iscsi_create_portal_group", 00:06:40.498 "iscsi_get_portal_groups", 00:06:40.498 "iscsi_delete_target_node", 00:06:40.498 "iscsi_target_node_remove_pg_ig_maps", 00:06:40.498 "iscsi_target_node_add_pg_ig_maps", 00:06:40.498 "iscsi_create_target_node", 00:06:40.498 "iscsi_get_target_nodes", 00:06:40.498 "iscsi_delete_initiator_group", 00:06:40.498 "iscsi_initiator_group_remove_initiators", 00:06:40.498 "iscsi_initiator_group_add_initiators", 00:06:40.498 "iscsi_create_initiator_group", 00:06:40.498 "iscsi_get_initiator_groups", 00:06:40.498 "nvmf_set_crdt", 00:06:40.498 "nvmf_set_config", 00:06:40.498 "nvmf_set_max_subsystems", 00:06:40.498 "nvmf_stop_mdns_prr", 00:06:40.498 "nvmf_publish_mdns_prr", 00:06:40.498 "nvmf_subsystem_get_listeners", 00:06:40.498 
"nvmf_subsystem_get_qpairs", 00:06:40.498 "nvmf_subsystem_get_controllers", 00:06:40.498 "nvmf_get_stats", 00:06:40.498 "nvmf_get_transports", 00:06:40.498 "nvmf_create_transport", 00:06:40.498 "nvmf_get_targets", 00:06:40.498 "nvmf_delete_target", 00:06:40.498 "nvmf_create_target", 00:06:40.498 "nvmf_subsystem_allow_any_host", 00:06:40.498 "nvmf_subsystem_set_keys", 00:06:40.498 "nvmf_subsystem_remove_host", 00:06:40.498 "nvmf_subsystem_add_host", 00:06:40.498 "nvmf_ns_remove_host", 00:06:40.498 "nvmf_ns_add_host", 00:06:40.498 "nvmf_subsystem_remove_ns", 00:06:40.498 "nvmf_subsystem_set_ns_ana_group", 00:06:40.498 "nvmf_subsystem_add_ns", 00:06:40.498 "nvmf_subsystem_listener_set_ana_state", 00:06:40.498 "nvmf_discovery_get_referrals", 00:06:40.498 "nvmf_discovery_remove_referral", 00:06:40.498 "nvmf_discovery_add_referral", 00:06:40.498 "nvmf_subsystem_remove_listener", 00:06:40.498 "nvmf_subsystem_add_listener", 00:06:40.498 "nvmf_delete_subsystem", 00:06:40.498 "nvmf_create_subsystem", 00:06:40.498 "nvmf_get_subsystems", 00:06:40.498 "env_dpdk_get_mem_stats", 00:06:40.498 "nbd_get_disks", 00:06:40.498 "nbd_stop_disk", 00:06:40.498 "nbd_start_disk", 00:06:40.498 "ublk_recover_disk", 00:06:40.498 "ublk_get_disks", 00:06:40.498 "ublk_stop_disk", 00:06:40.498 "ublk_start_disk", 00:06:40.498 "ublk_destroy_target", 00:06:40.498 "ublk_create_target", 00:06:40.498 "virtio_blk_create_transport", 00:06:40.498 "virtio_blk_get_transports", 00:06:40.498 "vhost_controller_set_coalescing", 00:06:40.498 "vhost_get_controllers", 00:06:40.498 "vhost_delete_controller", 00:06:40.498 "vhost_create_blk_controller", 00:06:40.498 "vhost_scsi_controller_remove_target", 00:06:40.498 "vhost_scsi_controller_add_target", 00:06:40.498 "vhost_start_scsi_controller", 00:06:40.498 "vhost_create_scsi_controller", 00:06:40.498 "thread_set_cpumask", 00:06:40.498 "scheduler_set_options", 00:06:40.498 "framework_get_governor", 00:06:40.498 "framework_get_scheduler", 00:06:40.498 
"framework_set_scheduler", 00:06:40.498 "framework_get_reactors", 00:06:40.498 "thread_get_io_channels", 00:06:40.498 "thread_get_pollers", 00:06:40.498 "thread_get_stats", 00:06:40.498 "framework_monitor_context_switch", 00:06:40.498 "spdk_kill_instance", 00:06:40.498 "log_enable_timestamps", 00:06:40.498 "log_get_flags", 00:06:40.498 "log_clear_flag", 00:06:40.498 "log_set_flag", 00:06:40.498 "log_get_level", 00:06:40.498 "log_set_level", 00:06:40.498 "log_get_print_level", 00:06:40.498 "log_set_print_level", 00:06:40.498 "framework_enable_cpumask_locks", 00:06:40.498 "framework_disable_cpumask_locks", 00:06:40.498 "framework_wait_init", 00:06:40.498 "framework_start_init", 00:06:40.498 "scsi_get_devices", 00:06:40.498 "bdev_get_histogram", 00:06:40.498 "bdev_enable_histogram", 00:06:40.498 "bdev_set_qos_limit", 00:06:40.498 "bdev_set_qd_sampling_period", 00:06:40.498 "bdev_get_bdevs", 00:06:40.498 "bdev_reset_iostat", 00:06:40.498 "bdev_get_iostat", 00:06:40.498 "bdev_examine", 00:06:40.498 "bdev_wait_for_examine", 00:06:40.498 "bdev_set_options", 00:06:40.498 "accel_get_stats", 00:06:40.498 "accel_set_options", 00:06:40.498 "accel_set_driver", 00:06:40.498 "accel_crypto_key_destroy", 00:06:40.498 "accel_crypto_keys_get", 00:06:40.499 "accel_crypto_key_create", 00:06:40.499 "accel_assign_opc", 00:06:40.499 "accel_get_module_info", 00:06:40.499 "accel_get_opc_assignments", 00:06:40.499 "vmd_rescan", 00:06:40.499 "vmd_remove_device", 00:06:40.499 "vmd_enable", 00:06:40.499 "sock_get_default_impl", 00:06:40.499 "sock_set_default_impl", 00:06:40.499 "sock_impl_set_options", 00:06:40.499 "sock_impl_get_options", 00:06:40.499 "iobuf_get_stats", 00:06:40.499 "iobuf_set_options", 00:06:40.499 "keyring_get_keys", 00:06:40.499 "vfu_tgt_set_base_path", 00:06:40.499 "framework_get_pci_devices", 00:06:40.499 "framework_get_config", 00:06:40.499 "framework_get_subsystems", 00:06:40.499 "fsdev_set_opts", 00:06:40.499 "fsdev_get_opts", 00:06:40.499 "trace_get_info", 
00:06:40.499 "trace_get_tpoint_group_mask", 00:06:40.499 "trace_disable_tpoint_group", 00:06:40.499 "trace_enable_tpoint_group", 00:06:40.499 "trace_clear_tpoint_mask", 00:06:40.499 "trace_set_tpoint_mask", 00:06:40.499 "notify_get_notifications", 00:06:40.499 "notify_get_types", 00:06:40.499 "spdk_get_version", 00:06:40.499 "rpc_get_methods" 00:06:40.499 ] 00:06:40.499 20:26:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.499 20:26:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:40.499 20:26:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 160732 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 160732 ']' 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 160732 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 160732 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 160732' 00:06:40.499 killing process with pid 160732 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 160732 00:06:40.499 20:26:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 160732 00:06:41.069 00:06:41.069 real 0m1.617s 00:06:41.069 user 0m3.023s 00:06:41.069 sys 0m0.460s 00:06:41.069 20:26:34 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.069 20:26:34 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:06:41.069 ************************************ 00:06:41.069 END TEST spdkcli_tcp 00:06:41.069 ************************************ 00:06:41.069 20:26:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.069 20:26:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.069 20:26:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.069 20:26:34 -- common/autotest_common.sh@10 -- # set +x 00:06:41.069 ************************************ 00:06:41.069 START TEST dpdk_mem_utility 00:06:41.069 ************************************ 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.069 * Looking for test storage... 00:06:41.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.069 20:26:34 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.069 20:26:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:41.069 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.069 --rc genhtml_branch_coverage=1 00:06:41.069 --rc genhtml_function_coverage=1 00:06:41.069 --rc genhtml_legend=1 00:06:41.069 --rc geninfo_all_blocks=1 00:06:41.069 --rc geninfo_unexecuted_blocks=1 00:06:41.069 00:06:41.069 ' 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:41.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.069 --rc genhtml_branch_coverage=1 00:06:41.069 --rc genhtml_function_coverage=1 00:06:41.069 --rc genhtml_legend=1 00:06:41.069 --rc geninfo_all_blocks=1 00:06:41.069 --rc geninfo_unexecuted_blocks=1 00:06:41.069 00:06:41.069 ' 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:41.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.069 --rc genhtml_branch_coverage=1 00:06:41.069 --rc genhtml_function_coverage=1 00:06:41.069 --rc genhtml_legend=1 00:06:41.069 --rc geninfo_all_blocks=1 00:06:41.069 --rc geninfo_unexecuted_blocks=1 00:06:41.069 00:06:41.069 ' 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:41.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.069 --rc genhtml_branch_coverage=1 00:06:41.069 --rc genhtml_function_coverage=1 00:06:41.069 --rc genhtml_legend=1 00:06:41.069 --rc geninfo_all_blocks=1 00:06:41.069 --rc geninfo_unexecuted_blocks=1 00:06:41.069 00:06:41.069 ' 00:06:41.069 20:26:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:41.069 20:26:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=161076 00:06:41.069 20:26:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 161076 00:06:41.069 20:26:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 161076 ']' 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.069 20:26:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.069 [2024-12-05 20:26:34.504247] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:06:41.069 [2024-12-05 20:26:34.504291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161076 ] 00:06:41.329 [2024-12-05 20:26:34.573678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.329 [2024-12-05 20:26:34.610855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.898 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.898 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:41.898 20:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:41.898 20:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:41.898 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.898 20:26:35 
dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.898 { 00:06:41.898 "filename": "/tmp/spdk_mem_dump.txt" 00:06:41.898 } 00:06:41.898 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.898 20:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:42.157 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:42.157 1 heaps totaling size 818.000000 MiB 00:06:42.157 size: 818.000000 MiB heap id: 0 00:06:42.157 end heaps---------- 00:06:42.157 9 mempools totaling size 603.782043 MiB 00:06:42.157 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:42.158 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:42.158 size: 100.555481 MiB name: bdev_io_161076 00:06:42.158 size: 50.003479 MiB name: msgpool_161076 00:06:42.158 size: 36.509338 MiB name: fsdev_io_161076 00:06:42.158 size: 21.763794 MiB name: PDU_Pool 00:06:42.158 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:42.158 size: 4.133484 MiB name: evtpool_161076 00:06:42.158 size: 0.026123 MiB name: Session_Pool 00:06:42.158 end mempools------- 00:06:42.158 6 memzones totaling size 4.142822 MiB 00:06:42.158 size: 1.000366 MiB name: RG_ring_0_161076 00:06:42.158 size: 1.000366 MiB name: RG_ring_1_161076 00:06:42.158 size: 1.000366 MiB name: RG_ring_4_161076 00:06:42.158 size: 1.000366 MiB name: RG_ring_5_161076 00:06:42.158 size: 0.125366 MiB name: RG_ring_2_161076 00:06:42.158 size: 0.015991 MiB name: RG_ring_3_161076 00:06:42.158 end memzones------- 00:06:42.158 20:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:42.158 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:42.158 list of free elements. 
size: 10.852478 MiB 00:06:42.158 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:42.158 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:42.158 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:42.158 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:42.158 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:42.158 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:42.158 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:42.158 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:42.158 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:42.158 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:42.158 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:42.158 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:42.158 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:42.158 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:42.158 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:42.158 list of standard malloc elements. 
size: 199.218628 MiB 00:06:42.158 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:42.158 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:42.158 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:42.158 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:42.158 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:42.158 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:42.158 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:42.158 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:42.158 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:42.158 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:42.158 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:42.158 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:42.158 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:42.158 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:42.158 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:42.158 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:42.158 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:42.158 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:42.158 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:42.158 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:42.158 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:42.158 list of memzone associated elements. 
size: 607.928894 MiB 00:06:42.158 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:42.158 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:42.158 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:42.158 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:42.158 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:42.158 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_161076_0 00:06:42.158 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:42.158 associated memzone info: size: 48.002930 MiB name: MP_msgpool_161076_0 00:06:42.158 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:42.158 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_161076_0 00:06:42.158 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:42.158 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:42.158 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:42.158 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:42.158 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:42.158 associated memzone info: size: 3.000122 MiB name: MP_evtpool_161076_0 00:06:42.158 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:42.158 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_161076 00:06:42.158 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:42.158 associated memzone info: size: 1.007996 MiB name: MP_evtpool_161076 00:06:42.158 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:42.158 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:42.158 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:42.158 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:42.158 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:42.158 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:42.158 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:42.158 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:42.158 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:42.158 associated memzone info: size: 1.000366 MiB name: RG_ring_0_161076 00:06:42.158 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:42.158 associated memzone info: size: 1.000366 MiB name: RG_ring_1_161076 00:06:42.158 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:42.158 associated memzone info: size: 1.000366 MiB name: RG_ring_4_161076 00:06:42.158 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:42.158 associated memzone info: size: 1.000366 MiB name: RG_ring_5_161076 00:06:42.158 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:42.158 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_161076 00:06:42.158 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:42.158 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_161076 00:06:42.158 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:42.158 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:42.158 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:42.158 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:42.158 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:42.158 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:42.158 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:42.158 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_161076 00:06:42.158 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:42.158 associated memzone info: size: 0.125366 MiB name: RG_ring_2_161076 00:06:42.158 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:06:42.158 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:42.158 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:42.158 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:42.158 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:42.158 associated memzone info: size: 0.015991 MiB name: RG_ring_3_161076 00:06:42.158 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:42.158 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:42.158 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:42.158 associated memzone info: size: 0.000183 MiB name: MP_msgpool_161076 00:06:42.158 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:42.158 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_161076 00:06:42.158 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:42.158 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_161076 00:06:42.158 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:42.158 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:42.158 20:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:42.158 20:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 161076 00:06:42.159 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 161076 ']' 00:06:42.159 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 161076 00:06:42.159 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:42.159 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.159 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161076 00:06:42.159 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.159 20:26:35 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.159 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 161076' 00:06:42.159 killing process with pid 161076 00:06:42.159 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 161076 00:06:42.159 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 161076 00:06:42.418 00:06:42.418 real 0m1.457s 00:06:42.418 user 0m1.503s 00:06:42.418 sys 0m0.424s 00:06:42.418 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.418 20:26:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.418 ************************************ 00:06:42.418 END TEST dpdk_mem_utility 00:06:42.418 ************************************ 00:06:42.418 20:26:35 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:42.418 20:26:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.418 20:26:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.418 20:26:35 -- common/autotest_common.sh@10 -- # set +x 00:06:42.418 ************************************ 00:06:42.418 START TEST event 00:06:42.418 ************************************ 00:06:42.418 20:26:35 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:42.678 * Looking for test storage... 
00:06:42.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:42.678 20:26:35 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:42.678 20:26:35 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:42.678 20:26:35 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:42.678 20:26:35 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:42.678 20:26:35 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.678 20:26:35 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.678 20:26:35 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.678 20:26:35 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.678 20:26:35 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.678 20:26:35 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.678 20:26:35 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.678 20:26:35 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.678 20:26:35 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.678 20:26:35 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.678 20:26:35 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.678 20:26:35 event -- scripts/common.sh@344 -- # case "$op" in 00:06:42.678 20:26:35 event -- scripts/common.sh@345 -- # : 1 00:06:42.678 20:26:35 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.678 20:26:35 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.678 20:26:35 event -- scripts/common.sh@365 -- # decimal 1 00:06:42.678 20:26:35 event -- scripts/common.sh@353 -- # local d=1 00:06:42.678 20:26:35 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.678 20:26:35 event -- scripts/common.sh@355 -- # echo 1 00:06:42.678 20:26:35 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.678 20:26:35 event -- scripts/common.sh@366 -- # decimal 2 00:06:42.678 20:26:35 event -- scripts/common.sh@353 -- # local d=2 00:06:42.678 20:26:35 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.678 20:26:35 event -- scripts/common.sh@355 -- # echo 2 00:06:42.678 20:26:35 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.678 20:26:35 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.678 20:26:35 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.678 20:26:35 event -- scripts/common.sh@368 -- # return 0 00:06:42.678 20:26:35 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.678 20:26:35 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:42.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.678 --rc genhtml_branch_coverage=1 00:06:42.678 --rc genhtml_function_coverage=1 00:06:42.678 --rc genhtml_legend=1 00:06:42.678 --rc geninfo_all_blocks=1 00:06:42.678 --rc geninfo_unexecuted_blocks=1 00:06:42.678 00:06:42.678 ' 00:06:42.678 20:26:35 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:42.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.678 --rc genhtml_branch_coverage=1 00:06:42.678 --rc genhtml_function_coverage=1 00:06:42.678 --rc genhtml_legend=1 00:06:42.678 --rc geninfo_all_blocks=1 00:06:42.678 --rc geninfo_unexecuted_blocks=1 00:06:42.678 00:06:42.678 ' 00:06:42.678 20:26:35 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:42.678 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:42.678 --rc genhtml_branch_coverage=1 00:06:42.678 --rc genhtml_function_coverage=1 00:06:42.678 --rc genhtml_legend=1 00:06:42.678 --rc geninfo_all_blocks=1 00:06:42.678 --rc geninfo_unexecuted_blocks=1 00:06:42.678 00:06:42.678 ' 00:06:42.678 20:26:35 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:42.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.678 --rc genhtml_branch_coverage=1 00:06:42.679 --rc genhtml_function_coverage=1 00:06:42.679 --rc genhtml_legend=1 00:06:42.679 --rc geninfo_all_blocks=1 00:06:42.679 --rc geninfo_unexecuted_blocks=1 00:06:42.679 00:06:42.679 ' 00:06:42.679 20:26:35 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:42.679 20:26:35 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:42.679 20:26:35 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.679 20:26:35 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:42.679 20:26:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.679 20:26:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.679 ************************************ 00:06:42.679 START TEST event_perf 00:06:42.679 ************************************ 00:06:42.679 20:26:36 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.679 Running I/O for 1 seconds...[2024-12-05 20:26:36.039549] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:06:42.679 [2024-12-05 20:26:36.039613] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161410 ] 00:06:42.679 [2024-12-05 20:26:36.114690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.938 [2024-12-05 20:26:36.155664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.938 [2024-12-05 20:26:36.155779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.938 [2024-12-05 20:26:36.155889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.938 Running I/O for 1 seconds...[2024-12-05 20:26:36.155890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.890 00:06:43.890 lcore 0: 209275 00:06:43.890 lcore 1: 209275 00:06:43.890 lcore 2: 209274 00:06:43.890 lcore 3: 209275 00:06:43.890 done. 
00:06:43.890 00:06:43.890 real 0m1.174s 00:06:43.890 user 0m4.095s 00:06:43.890 sys 0m0.076s 00:06:43.890 20:26:37 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.890 20:26:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.890 ************************************ 00:06:43.890 END TEST event_perf 00:06:43.890 ************************************ 00:06:43.890 20:26:37 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.890 20:26:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:43.890 20:26:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.890 20:26:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.890 ************************************ 00:06:43.890 START TEST event_reactor 00:06:43.890 ************************************ 00:06:43.890 20:26:37 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.890 [2024-12-05 20:26:37.284978] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:06:43.890 [2024-12-05 20:26:37.285038] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161693 ] 00:06:44.149 [2024-12-05 20:26:37.360386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.149 [2024-12-05 20:26:37.397027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.085 test_start 00:06:45.085 oneshot 00:06:45.085 tick 100 00:06:45.085 tick 100 00:06:45.085 tick 250 00:06:45.085 tick 100 00:06:45.085 tick 100 00:06:45.085 tick 250 00:06:45.085 tick 100 00:06:45.085 tick 500 00:06:45.085 tick 100 00:06:45.085 tick 100 00:06:45.085 tick 250 00:06:45.085 tick 100 00:06:45.085 tick 100 00:06:45.085 test_end 00:06:45.085 00:06:45.085 real 0m1.168s 00:06:45.085 user 0m1.090s 00:06:45.085 sys 0m0.074s 00:06:45.085 20:26:38 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.085 20:26:38 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:45.085 ************************************ 00:06:45.085 END TEST event_reactor 00:06:45.085 ************************************ 00:06:45.085 20:26:38 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:45.085 20:26:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:45.085 20:26:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.085 20:26:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.085 ************************************ 00:06:45.085 START TEST event_reactor_perf 00:06:45.085 ************************************ 00:06:45.085 20:26:38 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:45.085 [2024-12-05 20:26:38.524142] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:06:45.085 [2024-12-05 20:26:38.524210] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161971 ] 00:06:45.345 [2024-12-05 20:26:38.603530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.345 [2024-12-05 20:26:38.640805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.283 test_start 00:06:46.283 test_end 00:06:46.283 Performance: 556870 events per second 00:06:46.283 00:06:46.283 real 0m1.173s 00:06:46.283 user 0m1.094s 00:06:46.283 sys 0m0.075s 00:06:46.283 20:26:39 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.283 20:26:39 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.283 ************************************ 00:06:46.283 END TEST event_reactor_perf 00:06:46.283 ************************************ 00:06:46.283 20:26:39 event -- event/event.sh@49 -- # uname -s 00:06:46.283 20:26:39 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:46.283 20:26:39 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:46.283 20:26:39 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.283 20:26:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.283 20:26:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.544 ************************************ 00:06:46.544 START TEST event_scheduler 00:06:46.544 ************************************ 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:46.544 * Looking for test storage... 00:06:46.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.544 20:26:39 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.544 --rc genhtml_branch_coverage=1 00:06:46.544 --rc genhtml_function_coverage=1 00:06:46.544 --rc genhtml_legend=1 00:06:46.544 --rc geninfo_all_blocks=1 00:06:46.544 --rc geninfo_unexecuted_blocks=1 00:06:46.544 00:06:46.544 ' 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.544 --rc genhtml_branch_coverage=1 00:06:46.544 --rc genhtml_function_coverage=1 00:06:46.544 --rc 
genhtml_legend=1 00:06:46.544 --rc geninfo_all_blocks=1 00:06:46.544 --rc geninfo_unexecuted_blocks=1 00:06:46.544 00:06:46.544 ' 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.544 --rc genhtml_branch_coverage=1 00:06:46.544 --rc genhtml_function_coverage=1 00:06:46.544 --rc genhtml_legend=1 00:06:46.544 --rc geninfo_all_blocks=1 00:06:46.544 --rc geninfo_unexecuted_blocks=1 00:06:46.544 00:06:46.544 ' 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.544 --rc genhtml_branch_coverage=1 00:06:46.544 --rc genhtml_function_coverage=1 00:06:46.544 --rc genhtml_legend=1 00:06:46.544 --rc geninfo_all_blocks=1 00:06:46.544 --rc geninfo_unexecuted_blocks=1 00:06:46.544 00:06:46.544 ' 00:06:46.544 20:26:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:46.544 20:26:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=162291 00:06:46.544 20:26:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.544 20:26:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:46.544 20:26:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 162291 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 162291 ']' 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.544 20:26:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.544 [2024-12-05 20:26:39.971615] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:06:46.544 [2024-12-05 20:26:39.971655] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162291 ] 00:06:46.804 [2024-12-05 20:26:40.044102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.804 [2024-12-05 20:26:40.090961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.804 [2024-12-05 20:26:40.090988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.804 [2024-12-05 20:26:40.091089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.804 [2024-12-05 20:26:40.091089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.374 20:26:40 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.374 20:26:40 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:47.374 20:26:40 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:47.374 20:26:40 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.374 20:26:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.374 [2024-12-05 20:26:40.793617] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:47.374 [2024-12-05 20:26:40.793634] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:47.374 [2024-12-05 20:26:40.793642] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:47.374 [2024-12-05 20:26:40.793647] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:47.374 [2024-12-05 20:26:40.793652] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:47.374 20:26:40 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.374 20:26:40 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:47.374 20:26:40 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.374 20:26:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.633 [2024-12-05 20:26:40.872410] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:47.633 20:26:40 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.633 20:26:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:47.633 20:26:40 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.633 20:26:40 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.633 20:26:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.633 ************************************ 00:06:47.633 START TEST scheduler_create_thread 00:06:47.633 ************************************ 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.633 2 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.633 3 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.633 4 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.633 5 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.633 20:26:40 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.633 6 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.633 20:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.634 7 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.634 8 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.634 20:26:40 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.634 9 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.634 10 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.634 20:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.634 20:26:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.634 20:26:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:47.634 20:26:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:47.634 20:26:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.634 20:26:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.572 20:26:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.572 20:26:41 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:48.572 20:26:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.572 20:26:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.952 20:26:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.952 20:26:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:49.952 20:26:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:49.952 20:26:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.952 20:26:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.888 20:26:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.888 00:06:50.888 real 0m3.383s 00:06:50.888 user 0m0.023s 00:06:50.888 sys 0m0.007s 00:06:50.888 20:26:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.888 20:26:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.888 ************************************ 00:06:50.888 END TEST scheduler_create_thread 00:06:50.888 ************************************ 00:06:50.889 20:26:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:50.889 20:26:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 162291 00:06:50.889 20:26:44 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 162291 ']' 00:06:50.889 20:26:44 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 162291 00:06:50.889 20:26:44 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:50.889 20:26:44 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.148 20:26:44 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 162291 00:06:51.148 20:26:44 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:51.148 20:26:44 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:51.148 20:26:44 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 162291' 00:06:51.148 killing process with pid 162291 00:06:51.148 20:26:44 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 162291 00:06:51.148 20:26:44 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 162291 00:06:51.408 [2024-12-05 20:26:44.668382] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:51.667 00:06:51.667 real 0m5.120s 00:06:51.667 user 0m10.571s 00:06:51.667 sys 0m0.393s 00:06:51.667 20:26:44 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.667 20:26:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.667 ************************************ 00:06:51.667 END TEST event_scheduler 00:06:51.667 ************************************ 00:06:51.668 20:26:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:51.668 20:26:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:51.668 20:26:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.668 20:26:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.668 20:26:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.668 ************************************ 00:06:51.668 START TEST app_repeat 00:06:51.668 ************************************ 00:06:51.668 20:26:44 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=163151 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 163151' 00:06:51.668 Process app_repeat pid: 163151 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:51.668 spdk_app_start Round 0 00:06:51.668 20:26:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 163151 /var/tmp/spdk-nbd.sock 00:06:51.668 20:26:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 163151 ']' 00:06:51.668 20:26:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.668 20:26:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.668 20:26:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.668 20:26:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.668 20:26:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.668 [2024-12-05 20:26:44.987736] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:06:51.668 [2024-12-05 20:26:44.987787] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163151 ] 00:06:51.668 [2024-12-05 20:26:45.061086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.668 [2024-12-05 20:26:45.101775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.668 [2024-12-05 20:26:45.101778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.927 20:26:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.927 20:26:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:51.927 20:26:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.927 Malloc0 00:06:52.186 20:26:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.186 Malloc1 00:06:52.186 20:26:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.186 20:26:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.186 20:26:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.186 20:26:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:52.187 20:26:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.187 20:26:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:52.187 20:26:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.187 
20:26:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.187 20:26:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.187 20:26:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.187 20:26:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.187 20:26:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.187 20:26:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:52.187 20:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.187 20:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.187 20:26:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.447 /dev/nbd0 00:06:52.447 20:26:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.447 20:26:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:52.447 1+0 records in 00:06:52.447 1+0 records out 00:06:52.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227787 s, 18.0 MB/s 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.447 20:26:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:52.447 20:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.447 20:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.447 20:26:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.707 /dev/nbd1 00:06:52.707 20:26:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.707 20:26:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.707 20:26:46 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.707 1+0 records in 00:06:52.707 1+0 records out 00:06:52.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224511 s, 18.2 MB/s 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.707 20:26:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:52.707 20:26:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.707 20:26:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.707 20:26:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.707 20:26:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.707 20:26:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.966 20:26:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.966 { 00:06:52.966 "nbd_device": "/dev/nbd0", 00:06:52.966 "bdev_name": "Malloc0" 00:06:52.966 }, 00:06:52.966 { 00:06:52.966 "nbd_device": "/dev/nbd1", 00:06:52.966 "bdev_name": "Malloc1" 00:06:52.966 } 00:06:52.966 ]' 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.967 { 00:06:52.967 "nbd_device": "/dev/nbd0", 00:06:52.967 "bdev_name": "Malloc0" 00:06:52.967 
}, 00:06:52.967 { 00:06:52.967 "nbd_device": "/dev/nbd1", 00:06:52.967 "bdev_name": "Malloc1" 00:06:52.967 } 00:06:52.967 ]' 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.967 /dev/nbd1' 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.967 /dev/nbd1' 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.967 256+0 records in 00:06:52.967 256+0 records out 00:06:52.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100912 s, 104 MB/s 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.967 256+0 records in 00:06:52.967 256+0 records out 00:06:52.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129001 s, 81.3 MB/s 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.967 256+0 records in 00:06:52.967 256+0 records out 00:06:52.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013709 s, 76.5 MB/s 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.967 20:26:46 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.967 20:26:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.226 20:26:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.226 20:26:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.226 20:26:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.226 20:26:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.226 20:26:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.226 20:26:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.226 20:26:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.226 20:26:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.226 20:26:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.226 20:26:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.486 20:26:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.486 20:26:46 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.486 20:26:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.486 20:26:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.486 20:26:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.486 20:26:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.486 20:26:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.486 20:26:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.486 20:26:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.486 20:26:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.486 20:26:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.746 20:26:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.746 20:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.746 20:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.746 20:26:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.746 20:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.746 20:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.746 20:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.746 20:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.746 20:26:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.746 20:26:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.746 20:26:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.746 20:26:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.746 20:26:47 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.006 20:26:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:54.006 [2024-12-05 20:26:47.354666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.006 [2024-12-05 20:26:47.388782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.006 [2024-12-05 20:26:47.388783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.006 [2024-12-05 20:26:47.428753] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.006 [2024-12-05 20:26:47.428788] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.296 20:26:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.296 20:26:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:57.296 spdk_app_start Round 1 00:06:57.296 20:26:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 163151 /var/tmp/spdk-nbd.sock 00:06:57.296 20:26:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 163151 ']' 00:06:57.296 20:26:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.296 20:26:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.296 20:26:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:57.296 20:26:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.296 20:26:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.296 20:26:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.296 20:26:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:57.296 20:26:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.296 Malloc0 00:06:57.296 20:26:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.555 Malloc1 00:06:57.555 20:26:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.555 20:26:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:57.555 /dev/nbd0 00:06:57.813 20:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.813 20:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.813 1+0 records in 00:06:57.813 1+0 records out 00:06:57.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229625 s, 17.8 MB/s 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:57.813 20:26:51 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:57.813 20:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.813 20:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.813 20:26:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:57.813 /dev/nbd1 00:06:57.813 20:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.813 20:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.813 20:26:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.072 1+0 records in 00:06:58.072 1+0 records out 00:06:58.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234121 s, 17.5 MB/s 00:06:58.072 20:26:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.072 20:26:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.072 20:26:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.072 20:26:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.072 20:26:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.072 20:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.072 20:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.072 20:26:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.072 20:26:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.072 20:26:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.072 20:26:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.072 { 00:06:58.072 "nbd_device": "/dev/nbd0", 00:06:58.072 "bdev_name": "Malloc0" 00:06:58.072 }, 00:06:58.072 { 00:06:58.072 "nbd_device": "/dev/nbd1", 00:06:58.073 "bdev_name": "Malloc1" 00:06:58.073 } 00:06:58.073 ]' 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.073 { 00:06:58.073 "nbd_device": "/dev/nbd0", 00:06:58.073 "bdev_name": "Malloc0" 00:06:58.073 }, 00:06:58.073 { 00:06:58.073 "nbd_device": "/dev/nbd1", 00:06:58.073 "bdev_name": "Malloc1" 00:06:58.073 } 00:06:58.073 ]' 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:58.073 /dev/nbd1' 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:58.073 /dev/nbd1' 00:06:58.073 
20:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:58.073 256+0 records in 00:06:58.073 256+0 records out 00:06:58.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106197 s, 98.7 MB/s 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.073 20:26:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:58.333 256+0 records in 00:06:58.333 256+0 records out 00:06:58.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131954 s, 79.5 MB/s 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:58.333 256+0 records in 00:06:58.333 256+0 records out 00:06:58.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138587 s, 75.7 MB/s 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.333 20:26:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.593 20:26:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.593 20:26:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:58.593 20:26:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.593 20:26:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.593 20:26:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.593 20:26:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.593 20:26:51 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:58.593 20:26:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.593 20:26:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.593 20:26:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.593 20:26:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:58.852 20:26:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:58.852 20:26:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:59.111 20:26:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.370 [2024-12-05 20:26:52.559957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.370 [2024-12-05 20:26:52.593827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.370 [2024-12-05 20:26:52.593828] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.370 [2024-12-05 20:26:52.634439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.370 [2024-12-05 20:26:52.634479] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:02.670 20:26:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:02.670 20:26:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:02.670 spdk_app_start Round 2 00:07:02.670 20:26:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 163151 /var/tmp/spdk-nbd.sock 00:07:02.670 20:26:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 163151 ']' 00:07:02.670 20:26:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.670 20:26:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.670 20:26:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:02.670 20:26:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.670 20:26:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.670 20:26:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.670 20:26:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:02.670 20:26:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.670 Malloc0 00:07:02.671 20:26:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.671 Malloc1 00:07:02.671 20:26:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.671 20:26:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:02.929 /dev/nbd0 00:07:02.929 20:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.929 20:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.929 1+0 records in 00:07:02.929 1+0 records out 00:07:02.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000100768 s, 40.6 MB/s 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:02.929 20:26:56 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.929 20:26:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:02.929 20:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.929 20:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.929 20:26:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:03.188 /dev/nbd1 00:07:03.188 20:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:03.188 20:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.188 1+0 records in 00:07:03.188 1+0 records out 00:07:03.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252314 s, 16.2 MB/s 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.188 20:26:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:03.188 20:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.188 20:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.188 20:26:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.188 20:26:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.188 20:26:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:03.447 { 00:07:03.447 "nbd_device": "/dev/nbd0", 00:07:03.447 "bdev_name": "Malloc0" 00:07:03.447 }, 00:07:03.447 { 00:07:03.447 "nbd_device": "/dev/nbd1", 00:07:03.447 "bdev_name": "Malloc1" 00:07:03.447 } 00:07:03.447 ]' 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:03.447 { 00:07:03.447 "nbd_device": "/dev/nbd0", 00:07:03.447 "bdev_name": "Malloc0" 00:07:03.447 }, 00:07:03.447 { 00:07:03.447 "nbd_device": "/dev/nbd1", 00:07:03.447 "bdev_name": "Malloc1" 00:07:03.447 } 00:07:03.447 ]' 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:03.447 /dev/nbd1' 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:03.447 /dev/nbd1' 00:07:03.447 
20:26:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:03.447 20:26:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:03.448 256+0 records in 00:07:03.448 256+0 records out 00:07:03.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010709 s, 97.9 MB/s 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:03.448 256+0 records in 00:07:03.448 256+0 records out 00:07:03.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132027 s, 79.4 MB/s 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:03.448 256+0 records in 00:07:03.448 256+0 records out 00:07:03.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138341 s, 75.8 MB/s 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.448 20:26:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.708 20:26:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.708 20:26:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.708 20:26:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.708 20:26:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.708 20:26:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.708 20:26:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.708 20:26:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.708 20:26:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.708 20:26:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.708 20:26:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.968 20:26:57 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.968 20:26:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.968 20:26:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:04.228 20:26:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:04.488 [2024-12-05 20:26:57.743675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.488 [2024-12-05 20:26:57.777833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.488 [2024-12-05 20:26:57.777833] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.488 [2024-12-05 20:26:57.817904] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:04.488 [2024-12-05 20:26:57.817942] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:07.798 20:27:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 163151 /var/tmp/spdk-nbd.sock 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 163151 ']' 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:07.798 20:27:00 event.app_repeat -- event/event.sh@39 -- # killprocess 163151 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 163151 ']' 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 163151 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 163151 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 163151' 00:07:07.798 killing process with pid 163151 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@973 -- # kill 163151 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@978 -- # wait 163151 00:07:07.798 spdk_app_start is called in Round 0. 00:07:07.798 Shutdown signal received, stop current app iteration 00:07:07.798 Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 reinitialization... 00:07:07.798 spdk_app_start is called in Round 1. 00:07:07.798 Shutdown signal received, stop current app iteration 00:07:07.798 Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 reinitialization... 00:07:07.798 spdk_app_start is called in Round 2. 
00:07:07.798 Shutdown signal received, stop current app iteration 00:07:07.798 Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 reinitialization... 00:07:07.798 spdk_app_start is called in Round 3. 00:07:07.798 Shutdown signal received, stop current app iteration 00:07:07.798 20:27:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:07.798 20:27:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:07.798 00:07:07.798 real 0m16.040s 00:07:07.798 user 0m35.133s 00:07:07.798 sys 0m2.409s 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.798 20:27:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.798 ************************************ 00:07:07.798 END TEST app_repeat 00:07:07.798 ************************************ 00:07:07.798 20:27:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:07.798 20:27:01 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:07.798 20:27:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.798 20:27:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.798 20:27:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.798 ************************************ 00:07:07.798 START TEST cpu_locks 00:07:07.798 ************************************ 00:07:07.798 20:27:01 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:07.798 * Looking for test storage... 
00:07:07.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:07.798 20:27:01 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:07.798 20:27:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:07.798 20:27:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:07.798 20:27:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:07.798 20:27:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.798 20:27:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.798 20:27:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.798 20:27:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.798 20:27:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.798 20:27:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.798 20:27:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.798 20:27:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.799 20:27:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:07.799 20:27:01 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.799 20:27:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:07.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.799 --rc genhtml_branch_coverage=1 00:07:07.799 --rc genhtml_function_coverage=1 00:07:07.799 --rc genhtml_legend=1 00:07:07.799 --rc geninfo_all_blocks=1 00:07:07.799 --rc geninfo_unexecuted_blocks=1 00:07:07.799 00:07:07.799 ' 00:07:07.799 20:27:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:07.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.799 --rc genhtml_branch_coverage=1 00:07:07.799 --rc genhtml_function_coverage=1 00:07:07.799 --rc genhtml_legend=1 00:07:07.799 --rc geninfo_all_blocks=1 00:07:07.799 --rc geninfo_unexecuted_blocks=1 
00:07:07.799 00:07:07.799 ' 00:07:07.799 20:27:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:07.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.799 --rc genhtml_branch_coverage=1 00:07:07.799 --rc genhtml_function_coverage=1 00:07:07.799 --rc genhtml_legend=1 00:07:07.799 --rc geninfo_all_blocks=1 00:07:07.799 --rc geninfo_unexecuted_blocks=1 00:07:07.799 00:07:07.799 ' 00:07:07.799 20:27:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:07.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.799 --rc genhtml_branch_coverage=1 00:07:07.799 --rc genhtml_function_coverage=1 00:07:07.799 --rc genhtml_legend=1 00:07:07.799 --rc geninfo_all_blocks=1 00:07:07.799 --rc geninfo_unexecuted_blocks=1 00:07:07.799 00:07:07.799 ' 00:07:07.799 20:27:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:07.799 20:27:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:07.799 20:27:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:07.799 20:27:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:07.799 20:27:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.799 20:27:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.799 20:27:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.058 ************************************ 00:07:08.058 START TEST default_locks 00:07:08.058 ************************************ 00:07:08.058 20:27:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:08.058 20:27:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=166596 00:07:08.058 20:27:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 166596 00:07:08.058 20:27:01 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.058 20:27:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 166596 ']' 00:07:08.058 20:27:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.058 20:27:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.058 20:27:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.058 20:27:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.058 20:27:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.058 [2024-12-05 20:27:01.319274] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:07:08.058 [2024-12-05 20:27:01.319320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166596 ] 00:07:08.058 [2024-12-05 20:27:01.394525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.058 [2024-12-05 20:27:01.432635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.994 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.994 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:08.994 20:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 166596 00:07:08.994 20:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 166596 00:07:08.994 20:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.254 lslocks: write error 00:07:09.254 20:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 166596 00:07:09.254 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 166596 ']' 00:07:09.254 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 166596 00:07:09.254 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:09.254 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.254 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 166596 00:07:09.254 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.254 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.254 20:27:02 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 166596' 00:07:09.254 killing process with pid 166596 00:07:09.254 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 166596 00:07:09.254 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 166596 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 166596 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 166596 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 166596 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 166596 ']' 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (166596) - No such process 00:07:09.514 ERROR: process (pid: 166596) is no longer running 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:09.514 00:07:09.514 real 0m1.680s 00:07:09.514 user 0m1.753s 00:07:09.514 sys 0m0.565s 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.514 20:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.514 ************************************ 00:07:09.514 END TEST default_locks 00:07:09.514 ************************************ 00:07:09.774 20:27:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:09.774 20:27:02 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.774 20:27:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.774 20:27:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.774 ************************************ 00:07:09.774 START TEST default_locks_via_rpc 00:07:09.774 ************************************ 00:07:09.774 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:09.774 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=166942 00:07:09.774 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 166942 00:07:09.774 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.774 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 166942 ']' 00:07:09.774 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.774 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.774 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.774 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.774 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.774 [2024-12-05 20:27:03.063452] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:07:09.774 [2024-12-05 20:27:03.063490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166942 ] 00:07:09.774 [2024-12-05 20:27:03.135223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.774 [2024-12-05 20:27:03.174190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.035 20:27:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 166942 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 166942 00:07:10.035 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.294 20:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 166942 00:07:10.294 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 166942 ']' 00:07:10.294 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 166942 00:07:10.294 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:10.294 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.294 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 166942 00:07:10.294 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.294 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.295 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 166942' 00:07:10.295 killing process with pid 166942 00:07:10.295 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 166942 00:07:10.295 20:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 166942 00:07:10.863 00:07:10.863 real 0m1.008s 00:07:10.863 user 0m0.954s 00:07:10.863 sys 0m0.467s 00:07:10.863 20:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.863 20:27:04 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.863 ************************************ 00:07:10.863 END TEST default_locks_via_rpc 00:07:10.863 ************************************ 00:07:10.863 20:27:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:10.863 20:27:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.863 20:27:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.863 20:27:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.863 ************************************ 00:07:10.863 START TEST non_locking_app_on_locked_coremask 00:07:10.863 ************************************ 00:07:10.863 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:10.863 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=167228 00:07:10.863 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 167228 /var/tmp/spdk.sock 00:07:10.863 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.863 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 167228 ']' 00:07:10.863 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.863 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.863 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:10.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.863 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.863 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.863 [2024-12-05 20:27:04.138828] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:07:10.864 [2024-12-05 20:27:04.138865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167228 ] 00:07:10.864 [2024-12-05 20:27:04.208401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.864 [2024-12-05 20:27:04.247371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.123 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.123 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:11.123 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=167241 00:07:11.123 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 167241 /var/tmp/spdk2.sock 00:07:11.123 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:11.123 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 167241 ']' 00:07:11.123 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:11.123 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.123 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.123 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.123 20:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.123 [2024-12-05 20:27:04.515387] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:07:11.123 [2024-12-05 20:27:04.515433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167241 ] 00:07:11.383 [2024-12-05 20:27:04.596692] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.383 [2024-12-05 20:27:04.596716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:11.383 [2024-12-05 20:27:04.677393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:11.951 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:11.951 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:11.951 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 167228
00:07:11.951 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 167228
00:07:11.951 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:12.548 lslocks: write error
00:07:12.548 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 167228
00:07:12.548 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 167228 ']'
00:07:12.548 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 167228
00:07:12.548 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:12.548 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:12.548 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167228
00:07:12.548 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:12.548 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:12.548 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167228'
00:07:12.548 killing process with pid 167228
00:07:12.548 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 167228
00:07:12.548 20:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 167228
00:07:13.117 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 167241
00:07:13.117 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 167241 ']'
00:07:13.117 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 167241
00:07:13.117 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:13.117 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:13.117 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167241
00:07:13.117 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:13.117 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:13.117 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167241'
00:07:13.117 killing process with pid 167241
00:07:13.117 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 167241
00:07:13.117 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 167241
00:07:13.376
00:07:13.376 real 0m2.723s
00:07:13.376 user 0m2.847s
00:07:13.376 sys 0m0.909s
00:07:13.376 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:13.376 20:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:13.376 ************************************
00:07:13.376 END TEST non_locking_app_on_locked_coremask
00:07:13.376 ************************************
00:07:13.634 20:27:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:13.634 20:27:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:13.634 20:27:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.634 20:27:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:13.634 ************************************
00:07:13.634 START TEST locking_app_on_unlocked_coremask
00:07:13.634 ************************************
00:07:13.634 20:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:07:13.634 20:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=167796
00:07:13.634 20:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 167796 /var/tmp/spdk.sock
00:07:13.634 20:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:13.634 20:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 167796 ']'
00:07:13.634 20:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:13.634 20:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:13.634 20:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:13.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:13.634 20:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:13.634 20:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:13.634 [2024-12-05 20:27:06.933333] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:07:13.634 [2024-12-05 20:27:06.933373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167796 ]
00:07:13.634 [2024-12-05 20:27:07.006410] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:13.634 [2024-12-05 20:27:07.006434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.634 [2024-12-05 20:27:07.043793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.573 20:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:14.573 20:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:14.573 20:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=167811
00:07:14.573 20:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 167811 /var/tmp/spdk2.sock
00:07:14.573 20:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:14.573 20:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 167811 ']'
00:07:14.573 20:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:14.573 20:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:14.573 20:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:14.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:14.573 20:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:14.573 20:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:14.574 [2024-12-05 20:27:07.794452] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:07:14.574 [2024-12-05 20:27:07.794495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167811 ]
00:07:14.574 [2024-12-05 20:27:07.878104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.574 [2024-12-05 20:27:07.951997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.512 20:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:15.513 20:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:15.513 20:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 167811
00:07:15.513 20:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:15.513 20:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 167811
00:07:16.081 lslocks: write error
00:07:16.081 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 167796
00:07:16.081 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 167796 ']'
00:07:16.081 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 167796
00:07:16.081 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:16.081 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:16.081 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167796
00:07:16.081 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:16.081 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:16.081 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167796'
00:07:16.081 killing process with pid 167796
00:07:16.081 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 167796
00:07:16.081 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 167796
00:07:16.649 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 167811
00:07:16.649 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 167811 ']'
00:07:16.649 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 167811
00:07:16.649 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:16.649 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:16.649 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167811
00:07:16.650 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:16.650 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:16.650 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167811'
00:07:16.650 killing process with pid 167811
00:07:16.650 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 167811
00:07:16.650 20:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 167811
00:07:16.909
00:07:16.909 real 0m3.340s
00:07:16.909 user 0m3.606s
00:07:16.909 sys 0m0.987s
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:16.909 ************************************
00:07:16.909 END TEST locking_app_on_unlocked_coremask
00:07:16.909 ************************************
00:07:16.909 20:27:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:16.909 20:27:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:16.909 20:27:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:16.909 20:27:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:16.909 ************************************
00:07:16.909 START TEST locking_app_on_locked_coremask
00:07:16.909 ************************************
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=168749
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 168749 /var/tmp/spdk.sock
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 168749 ']'
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:16.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:16.909 20:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:16.909 [2024-12-05 20:27:10.343124] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:07:16.909 [2024-12-05 20:27:10.343161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168749 ]
00:07:17.169 [2024-12-05 20:27:10.413175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:17.169 [2024-12-05 20:27:10.452095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.737 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:17.737 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:17.737 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:17.737 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=169006
00:07:17.737 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 169006 /var/tmp/spdk2.sock
00:07:17.737 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:17.737 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 169006 /var/tmp/spdk2.sock
00:07:17.738 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:17.738 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:17.738 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:17.738 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:17.738 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 169006 /var/tmp/spdk2.sock
00:07:17.738 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 169006 ']'
00:07:17.738 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:17.738 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:17.738 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:17.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:17.738 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:17.738 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:17.738 [2024-12-05 20:27:11.165518] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:07:17.738 [2024-12-05 20:27:11.165562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169006 ]
00:07:17.997 [2024-12-05 20:27:11.249316] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 168749 has claimed it.
00:07:17.997 [2024-12-05 20:27:11.249351] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:18.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (169006) - No such process
00:07:18.566 ERROR: process (pid: 169006) is no longer running
00:07:18.566 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:18.566 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:18.566 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:18.566 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:18.566 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:18.566 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:18.566 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 168749
00:07:18.566 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 168749
00:07:18.566 20:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:18.827 lslocks: write error
00:07:18.827 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 168749
00:07:18.827 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 168749 ']'
00:07:18.827 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 168749
00:07:18.827 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:18.827 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:18.827 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 168749
00:07:18.827 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:18.827 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:18.827 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 168749'
00:07:18.827 killing process with pid 168749
00:07:18.827 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 168749
00:07:18.827 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 168749
00:07:19.396
00:07:19.396 real 0m2.250s
00:07:19.396 user 0m2.475s
00:07:19.396 sys 0m0.618s
00:07:19.396 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:19.396 20:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:19.396 ************************************
00:07:19.396 END TEST locking_app_on_locked_coremask
00:07:19.396 ************************************
00:07:19.396 20:27:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:19.396 20:27:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:19.396 20:27:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:19.396 20:27:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:19.396 ************************************
00:07:19.396 START TEST locking_overlapped_coremask
00:07:19.396 ************************************
00:07:19.396 20:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:07:19.396 20:27:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=169301
00:07:19.396 20:27:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 169301 /var/tmp/spdk.sock
00:07:19.396 20:27:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:07:19.396 20:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 169301 ']'
00:07:19.396 20:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:19.396 20:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:19.396 20:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:19.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:19.396 20:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:19.396 20:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:19.396 [2024-12-05 20:27:12.660808] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:07:19.396 [2024-12-05 20:27:12.660848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169301 ]
00:07:19.396 [2024-12-05 20:27:12.732361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:19.396 [2024-12-05 20:27:12.775538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:19.396 [2024-12-05 20:27:12.775650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.396 [2024-12-05 20:27:12.775651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=169322
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 169322 /var/tmp/spdk2.sock
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 169322 /var/tmp/spdk2.sock
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 169322 /var/tmp/spdk2.sock
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 169322 ']'
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:20.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:20.331 20:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:20.331 [2024-12-05 20:27:13.522907] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:07:20.331 [2024-12-05 20:27:13.522949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169322 ]
00:07:20.331 [2024-12-05 20:27:13.606096] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 169301 has claimed it.
00:07:20.331 [2024-12-05 20:27:13.606128] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:20.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (169322) - No such process
00:07:20.899 ERROR: process (pid: 169322) is no longer running
00:07:20.899 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:20.899 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:20.899 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:20.899 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:20.899 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:20.899 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:20.899 20:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 169301
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 169301 ']'
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 169301
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 169301
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 169301'
00:07:20.900 killing process with pid 169301
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 169301
00:07:20.900 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 169301
00:07:21.159
00:07:21.159 real 0m1.893s
00:07:21.159 user 0m5.476s
00:07:21.159 sys 0m0.401s
00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:21.159 ************************************
00:07:21.159 END TEST locking_overlapped_coremask
00:07:21.159 ************************************
00:07:21.159 20:27:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:07:21.159 20:27:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:21.159 20:27:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.159 20:27:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:21.159 ************************************
00:07:21.159 START TEST locking_overlapped_coremask_via_rpc
00:07:21.159 ************************************
00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=169610
00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 169610 /var/tmp/spdk.sock
00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 169610 ']'
00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:21.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.159 20:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.418 [2024-12-05 20:27:14.624548] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:07:21.418 [2024-12-05 20:27:14.624586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169610 ] 00:07:21.418 [2024-12-05 20:27:14.697607] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:21.418 [2024-12-05 20:27:14.697632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.418 [2024-12-05 20:27:14.733907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.418 [2024-12-05 20:27:14.734021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.418 [2024-12-05 20:27:14.734022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.353 20:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.353 20:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:22.353 20:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=169875 00:07:22.353 20:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 169875 /var/tmp/spdk2.sock 00:07:22.353 20:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:07:22.353 20:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 169875 ']' 00:07:22.353 20:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.353 20:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.353 20:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.353 20:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.353 20:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.353 [2024-12-05 20:27:15.489925] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:07:22.353 [2024-12-05 20:27:15.489969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169875 ] 00:07:22.353 [2024-12-05 20:27:15.571115] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:22.353 [2024-12-05 20:27:15.571145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.353 [2024-12-05 20:27:15.650961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.353 [2024-12-05 20:27:15.654101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.353 [2024-12-05 20:27:15.654102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.921 20:27:16 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.921 [2024-12-05 20:27:16.306122] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 169610 has claimed it. 00:07:22.921 request: 00:07:22.921 { 00:07:22.921 "method": "framework_enable_cpumask_locks", 00:07:22.921 "req_id": 1 00:07:22.921 } 00:07:22.921 Got JSON-RPC error response 00:07:22.921 response: 00:07:22.921 { 00:07:22.921 "code": -32603, 00:07:22.921 "message": "Failed to claim CPU core: 2" 00:07:22.921 } 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 169610 /var/tmp/spdk.sock 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 169610 ']' 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.921 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.178 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.178 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:23.178 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 169875 /var/tmp/spdk2.sock 00:07:23.178 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 169875 ']' 00:07:23.178 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.178 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.178 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:23.178 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.178 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.436 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.436 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:23.436 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:23.436 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:23.436 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:23.436 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:23.436 00:07:23.436 real 0m2.133s 00:07:23.436 user 0m0.895s 00:07:23.436 sys 0m0.167s 00:07:23.436 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.436 20:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.436 ************************************ 00:07:23.436 END TEST locking_overlapped_coremask_via_rpc 00:07:23.436 ************************************ 00:07:23.436 20:27:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:23.436 20:27:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 169610 ]] 00:07:23.436 20:27:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 169610 00:07:23.436 20:27:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 169610 ']' 00:07:23.436 20:27:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 169610 00:07:23.436 20:27:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:23.436 20:27:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.436 20:27:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 169610 00:07:23.436 20:27:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.436 20:27:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.436 20:27:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 169610' 00:07:23.436 killing process with pid 169610 00:07:23.436 20:27:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 169610 00:07:23.436 20:27:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 169610 00:07:23.695 20:27:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 169875 ]] 00:07:23.695 20:27:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 169875 00:07:23.695 20:27:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 169875 ']' 00:07:23.695 20:27:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 169875 00:07:23.695 20:27:17 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:23.695 20:27:17 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.695 20:27:17 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 169875 00:07:23.955 20:27:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:23.955 20:27:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:23.955 20:27:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 169875' 00:07:23.955 
killing process with pid 169875 00:07:23.955 20:27:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 169875 00:07:23.955 20:27:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 169875 00:07:24.215 20:27:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:24.215 20:27:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:24.215 20:27:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 169610 ]] 00:07:24.215 20:27:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 169610 00:07:24.215 20:27:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 169610 ']' 00:07:24.215 20:27:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 169610 00:07:24.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (169610) - No such process 00:07:24.215 20:27:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 169610 is not found' 00:07:24.215 Process with pid 169610 is not found 00:07:24.215 20:27:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 169875 ]] 00:07:24.215 20:27:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 169875 00:07:24.215 20:27:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 169875 ']' 00:07:24.215 20:27:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 169875 00:07:24.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (169875) - No such process 00:07:24.215 20:27:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 169875 is not found' 00:07:24.215 Process with pid 169875 is not found 00:07:24.215 20:27:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:24.215 00:07:24.215 real 0m16.410s 00:07:24.215 user 0m28.962s 00:07:24.215 sys 0m5.072s 00:07:24.215 20:27:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.215 20:27:17 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.215 ************************************ 00:07:24.215 END TEST cpu_locks 00:07:24.215 ************************************ 00:07:24.215 00:07:24.215 real 0m41.697s 00:07:24.215 user 1m21.223s 00:07:24.215 sys 0m8.469s 00:07:24.215 20:27:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.215 20:27:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.215 ************************************ 00:07:24.215 END TEST event 00:07:24.215 ************************************ 00:07:24.215 20:27:17 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:24.215 20:27:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.215 20:27:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.215 20:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.215 ************************************ 00:07:24.215 START TEST thread 00:07:24.215 ************************************ 00:07:24.215 20:27:17 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:24.215 * Looking for test storage... 
00:07:24.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:24.476 20:27:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.476 20:27:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.476 20:27:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.476 20:27:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.476 20:27:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.476 20:27:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.476 20:27:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.476 20:27:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.476 20:27:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.476 20:27:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.476 20:27:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.476 20:27:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:24.476 20:27:17 thread -- scripts/common.sh@345 -- # : 1 00:07:24.476 20:27:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.476 20:27:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.476 20:27:17 thread -- scripts/common.sh@365 -- # decimal 1 00:07:24.476 20:27:17 thread -- scripts/common.sh@353 -- # local d=1 00:07:24.476 20:27:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.476 20:27:17 thread -- scripts/common.sh@355 -- # echo 1 00:07:24.476 20:27:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.476 20:27:17 thread -- scripts/common.sh@366 -- # decimal 2 00:07:24.476 20:27:17 thread -- scripts/common.sh@353 -- # local d=2 00:07:24.476 20:27:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.476 20:27:17 thread -- scripts/common.sh@355 -- # echo 2 00:07:24.476 20:27:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.476 20:27:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.476 20:27:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.476 20:27:17 thread -- scripts/common.sh@368 -- # return 0 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.476 --rc genhtml_branch_coverage=1 00:07:24.476 --rc genhtml_function_coverage=1 00:07:24.476 --rc genhtml_legend=1 00:07:24.476 --rc geninfo_all_blocks=1 00:07:24.476 --rc geninfo_unexecuted_blocks=1 00:07:24.476 00:07:24.476 ' 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.476 --rc genhtml_branch_coverage=1 00:07:24.476 --rc genhtml_function_coverage=1 00:07:24.476 --rc genhtml_legend=1 00:07:24.476 --rc geninfo_all_blocks=1 00:07:24.476 --rc geninfo_unexecuted_blocks=1 00:07:24.476 00:07:24.476 ' 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:24.476 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.476 --rc genhtml_branch_coverage=1 00:07:24.476 --rc genhtml_function_coverage=1 00:07:24.476 --rc genhtml_legend=1 00:07:24.476 --rc geninfo_all_blocks=1 00:07:24.476 --rc geninfo_unexecuted_blocks=1 00:07:24.476 00:07:24.476 ' 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.476 --rc genhtml_branch_coverage=1 00:07:24.476 --rc genhtml_function_coverage=1 00:07:24.476 --rc genhtml_legend=1 00:07:24.476 --rc geninfo_all_blocks=1 00:07:24.476 --rc geninfo_unexecuted_blocks=1 00:07:24.476 00:07:24.476 ' 00:07:24.476 20:27:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.476 20:27:17 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.476 ************************************ 00:07:24.476 START TEST thread_poller_perf 00:07:24.476 ************************************ 00:07:24.476 20:27:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.476 [2024-12-05 20:27:17.791593] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:07:24.476 [2024-12-05 20:27:17.791661] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170259 ] 00:07:24.476 [2024-12-05 20:27:17.870190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.476 [2024-12-05 20:27:17.907431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.476 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:25.857 [2024-12-05T19:27:19.298Z] ====================================== 00:07:25.857 [2024-12-05T19:27:19.298Z] busy:2207775650 (cyc) 00:07:25.857 [2024-12-05T19:27:19.298Z] total_run_count: 460000 00:07:25.857 [2024-12-05T19:27:19.298Z] tsc_hz: 2200000000 (cyc) 00:07:25.857 [2024-12-05T19:27:19.298Z] ====================================== 00:07:25.857 [2024-12-05T19:27:19.298Z] poller_cost: 4799 (cyc), 2181 (nsec) 00:07:25.857 00:07:25.857 real 0m1.176s 00:07:25.857 user 0m1.094s 00:07:25.857 sys 0m0.078s 00:07:25.857 20:27:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.857 20:27:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.857 ************************************ 00:07:25.857 END TEST thread_poller_perf 00:07:25.857 ************************************ 00:07:25.857 20:27:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.857 20:27:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:25.857 20:27:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.857 20:27:18 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.857 ************************************ 00:07:25.857 START TEST thread_poller_perf 00:07:25.857 
************************************ 00:07:25.857 20:27:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.857 [2024-12-05 20:27:19.037199] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:07:25.857 [2024-12-05 20:27:19.037266] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170541 ] 00:07:25.857 [2024-12-05 20:27:19.117908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.857 [2024-12-05 20:27:19.155426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.857 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:26.795 [2024-12-05T19:27:20.236Z] ====================================== 00:07:26.795 [2024-12-05T19:27:20.236Z] busy:2201606698 (cyc) 00:07:26.795 [2024-12-05T19:27:20.236Z] total_run_count: 5505000 00:07:26.795 [2024-12-05T19:27:20.236Z] tsc_hz: 2200000000 (cyc) 00:07:26.795 [2024-12-05T19:27:20.236Z] ====================================== 00:07:26.795 [2024-12-05T19:27:20.236Z] poller_cost: 399 (cyc), 181 (nsec) 00:07:26.795 00:07:26.795 real 0m1.181s 00:07:26.795 user 0m1.105s 00:07:26.795 sys 0m0.073s 00:07:26.795 20:27:20 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.795 20:27:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.796 ************************************ 00:07:26.796 END TEST thread_poller_perf 00:07:26.796 ************************************ 00:07:26.796 20:27:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:26.796 00:07:26.796 real 0m2.657s 00:07:26.796 user 0m2.356s 00:07:26.796 sys 0m0.316s 00:07:26.796 20:27:20 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.796 20:27:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.796 ************************************ 00:07:26.796 END TEST thread 00:07:26.796 ************************************ 00:07:27.056 20:27:20 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:27.056 20:27:20 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.056 20:27:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.056 20:27:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.056 20:27:20 -- common/autotest_common.sh@10 -- # set +x 00:07:27.056 ************************************ 00:07:27.056 START TEST app_cmdline 00:07:27.056 ************************************ 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.056 * Looking for test storage... 00:07:27.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.056 20:27:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:27.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.056 --rc genhtml_branch_coverage=1 
00:07:27.056 --rc genhtml_function_coverage=1 00:07:27.056 --rc genhtml_legend=1 00:07:27.056 --rc geninfo_all_blocks=1 00:07:27.056 --rc geninfo_unexecuted_blocks=1 00:07:27.056 00:07:27.056 ' 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:27.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.056 --rc genhtml_branch_coverage=1 00:07:27.056 --rc genhtml_function_coverage=1 00:07:27.056 --rc genhtml_legend=1 00:07:27.056 --rc geninfo_all_blocks=1 00:07:27.056 --rc geninfo_unexecuted_blocks=1 00:07:27.056 00:07:27.056 ' 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:27.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.056 --rc genhtml_branch_coverage=1 00:07:27.056 --rc genhtml_function_coverage=1 00:07:27.056 --rc genhtml_legend=1 00:07:27.056 --rc geninfo_all_blocks=1 00:07:27.056 --rc geninfo_unexecuted_blocks=1 00:07:27.056 00:07:27.056 ' 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:27.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.056 --rc genhtml_branch_coverage=1 00:07:27.056 --rc genhtml_function_coverage=1 00:07:27.056 --rc genhtml_legend=1 00:07:27.056 --rc geninfo_all_blocks=1 00:07:27.056 --rc geninfo_unexecuted_blocks=1 00:07:27.056 00:07:27.056 ' 00:07:27.056 20:27:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:27.056 20:27:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=170866 00:07:27.056 20:27:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 170866 00:07:27.056 20:27:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 170866 ']' 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.056 20:27:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.316 [2024-12-05 20:27:20.532023] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:07:27.316 [2024-12-05 20:27:20.532073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170866 ] 00:07:27.316 [2024-12-05 20:27:20.601715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.316 [2024-12-05 20:27:20.640927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:28.256 20:27:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:28.256 { 00:07:28.256 "version": "SPDK v25.01-pre git sha1 98eca6fa0", 00:07:28.256 "fields": { 00:07:28.256 "major": 25, 00:07:28.256 "minor": 1, 00:07:28.256 "patch": 0, 00:07:28.256 "suffix": "-pre", 00:07:28.256 "commit": "98eca6fa0" 00:07:28.256 } 00:07:28.256 } 00:07:28.256 20:27:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:28.256 20:27:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:28.256 20:27:21 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:28.256 20:27:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:28.256 20:27:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.256 20:27:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:28.256 20:27:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.256 20:27:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:28.256 20:27:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:28.256 20:27:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:28.256 20:27:21 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.515 request: 00:07:28.515 { 00:07:28.515 "method": "env_dpdk_get_mem_stats", 00:07:28.515 "req_id": 1 00:07:28.515 } 00:07:28.515 Got JSON-RPC error response 00:07:28.515 response: 00:07:28.515 { 00:07:28.515 "code": -32601, 00:07:28.515 "message": "Method not found" 00:07:28.515 } 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.515 20:27:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 170866 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 170866 ']' 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 170866 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 170866 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 170866' 00:07:28.515 killing process with pid 170866 00:07:28.515 20:27:21 
app_cmdline -- common/autotest_common.sh@973 -- # kill 170866 00:07:28.515 20:27:21 app_cmdline -- common/autotest_common.sh@978 -- # wait 170866 00:07:28.786 00:07:28.786 real 0m1.780s 00:07:28.786 user 0m2.091s 00:07:28.786 sys 0m0.476s 00:07:28.786 20:27:22 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.786 20:27:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.786 ************************************ 00:07:28.786 END TEST app_cmdline 00:07:28.786 ************************************ 00:07:28.786 20:27:22 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:28.786 20:27:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.786 20:27:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.786 20:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:28.786 ************************************ 00:07:28.786 START TEST version 00:07:28.786 ************************************ 00:07:28.786 20:27:22 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:29.048 * Looking for test storage... 
00:07:29.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:29.048 20:27:22 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.048 20:27:22 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.048 20:27:22 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.048 20:27:22 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.048 20:27:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.048 20:27:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.048 20:27:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.048 20:27:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.048 20:27:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.048 20:27:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.048 20:27:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.048 20:27:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.048 20:27:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.048 20:27:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.048 20:27:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.048 20:27:22 version -- scripts/common.sh@344 -- # case "$op" in 00:07:29.048 20:27:22 version -- scripts/common.sh@345 -- # : 1 00:07:29.048 20:27:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.048 20:27:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.048 20:27:22 version -- scripts/common.sh@365 -- # decimal 1 00:07:29.048 20:27:22 version -- scripts/common.sh@353 -- # local d=1 00:07:29.048 20:27:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.048 20:27:22 version -- scripts/common.sh@355 -- # echo 1 00:07:29.048 20:27:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.048 20:27:22 version -- scripts/common.sh@366 -- # decimal 2 00:07:29.048 20:27:22 version -- scripts/common.sh@353 -- # local d=2 00:07:29.048 20:27:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.048 20:27:22 version -- scripts/common.sh@355 -- # echo 2 00:07:29.048 20:27:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.048 20:27:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.048 20:27:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.048 20:27:22 version -- scripts/common.sh@368 -- # return 0 00:07:29.048 20:27:22 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.048 20:27:22 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.048 --rc genhtml_branch_coverage=1 00:07:29.048 --rc genhtml_function_coverage=1 00:07:29.048 --rc genhtml_legend=1 00:07:29.048 --rc geninfo_all_blocks=1 00:07:29.048 --rc geninfo_unexecuted_blocks=1 00:07:29.048 00:07:29.048 ' 00:07:29.048 20:27:22 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.048 --rc genhtml_branch_coverage=1 00:07:29.048 --rc genhtml_function_coverage=1 00:07:29.048 --rc genhtml_legend=1 00:07:29.048 --rc geninfo_all_blocks=1 00:07:29.048 --rc geninfo_unexecuted_blocks=1 00:07:29.048 00:07:29.048 ' 00:07:29.048 20:27:22 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:29.048 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.048 --rc genhtml_branch_coverage=1 00:07:29.048 --rc genhtml_function_coverage=1 00:07:29.048 --rc genhtml_legend=1 00:07:29.048 --rc geninfo_all_blocks=1 00:07:29.048 --rc geninfo_unexecuted_blocks=1 00:07:29.048 00:07:29.048 ' 00:07:29.048 20:27:22 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.048 --rc genhtml_branch_coverage=1 00:07:29.048 --rc genhtml_function_coverage=1 00:07:29.048 --rc genhtml_legend=1 00:07:29.048 --rc geninfo_all_blocks=1 00:07:29.048 --rc geninfo_unexecuted_blocks=1 00:07:29.048 00:07:29.048 ' 00:07:29.048 20:27:22 version -- app/version.sh@17 -- # get_header_version major 00:07:29.048 20:27:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.048 20:27:22 version -- app/version.sh@14 -- # cut -f2 00:07:29.048 20:27:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.048 20:27:22 version -- app/version.sh@17 -- # major=25 00:07:29.048 20:27:22 version -- app/version.sh@18 -- # get_header_version minor 00:07:29.048 20:27:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.048 20:27:22 version -- app/version.sh@14 -- # cut -f2 00:07:29.048 20:27:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.048 20:27:22 version -- app/version.sh@18 -- # minor=1 00:07:29.048 20:27:22 version -- app/version.sh@19 -- # get_header_version patch 00:07:29.048 20:27:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.048 20:27:22 version -- app/version.sh@14 -- # cut -f2 00:07:29.048 20:27:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.048 
20:27:22 version -- app/version.sh@19 -- # patch=0 00:07:29.048 20:27:22 version -- app/version.sh@20 -- # get_header_version suffix 00:07:29.048 20:27:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.048 20:27:22 version -- app/version.sh@14 -- # cut -f2 00:07:29.048 20:27:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.048 20:27:22 version -- app/version.sh@20 -- # suffix=-pre 00:07:29.048 20:27:22 version -- app/version.sh@22 -- # version=25.1 00:07:29.048 20:27:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:29.048 20:27:22 version -- app/version.sh@28 -- # version=25.1rc0 00:07:29.048 20:27:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:29.048 20:27:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:29.048 20:27:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:29.048 20:27:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:29.048 00:07:29.048 real 0m0.243s 00:07:29.048 user 0m0.150s 00:07:29.048 sys 0m0.136s 00:07:29.048 20:27:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.048 20:27:22 version -- common/autotest_common.sh@10 -- # set +x 00:07:29.048 ************************************ 00:07:29.048 END TEST version 00:07:29.048 ************************************ 00:07:29.048 20:27:22 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:29.048 20:27:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:29.048 20:27:22 -- spdk/autotest.sh@194 -- # uname -s 00:07:29.049 20:27:22 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:29.049 20:27:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:29.049 20:27:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:29.049 20:27:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:29.049 20:27:22 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:29.049 20:27:22 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:29.049 20:27:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.049 20:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:29.049 20:27:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:29.049 20:27:22 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:29.049 20:27:22 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:29.049 20:27:22 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:29.049 20:27:22 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:29.049 20:27:22 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:29.049 20:27:22 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.049 20:27:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.049 20:27:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.049 20:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:29.308 ************************************ 00:07:29.308 START TEST nvmf_tcp 00:07:29.308 ************************************ 00:07:29.308 20:27:22 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.308 * Looking for test storage... 
00:07:29.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.309 20:27:22 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.309 --rc genhtml_branch_coverage=1 00:07:29.309 --rc genhtml_function_coverage=1 00:07:29.309 --rc genhtml_legend=1 00:07:29.309 --rc geninfo_all_blocks=1 00:07:29.309 --rc geninfo_unexecuted_blocks=1 00:07:29.309 00:07:29.309 ' 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.309 --rc genhtml_branch_coverage=1 00:07:29.309 --rc genhtml_function_coverage=1 00:07:29.309 --rc genhtml_legend=1 00:07:29.309 --rc geninfo_all_blocks=1 00:07:29.309 --rc geninfo_unexecuted_blocks=1 00:07:29.309 00:07:29.309 ' 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:29.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.309 --rc genhtml_branch_coverage=1 00:07:29.309 --rc genhtml_function_coverage=1 00:07:29.309 --rc genhtml_legend=1 00:07:29.309 --rc geninfo_all_blocks=1 00:07:29.309 --rc geninfo_unexecuted_blocks=1 00:07:29.309 00:07:29.309 ' 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.309 --rc genhtml_branch_coverage=1 00:07:29.309 --rc genhtml_function_coverage=1 00:07:29.309 --rc genhtml_legend=1 00:07:29.309 --rc geninfo_all_blocks=1 00:07:29.309 --rc geninfo_unexecuted_blocks=1 00:07:29.309 00:07:29.309 ' 00:07:29.309 20:27:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:29.309 20:27:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:29.309 20:27:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.309 20:27:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.309 ************************************ 00:07:29.309 START TEST nvmf_target_core 00:07:29.309 ************************************ 00:07:29.309 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:29.570 * Looking for test storage... 
00:07:29.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:29.570 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.571 --rc genhtml_branch_coverage=1 00:07:29.571 --rc genhtml_function_coverage=1 00:07:29.571 --rc genhtml_legend=1 00:07:29.571 --rc geninfo_all_blocks=1 00:07:29.571 --rc geninfo_unexecuted_blocks=1 00:07:29.571 00:07:29.571 ' 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.571 --rc genhtml_branch_coverage=1 
00:07:29.571 --rc genhtml_function_coverage=1 00:07:29.571 --rc genhtml_legend=1 00:07:29.571 --rc geninfo_all_blocks=1 00:07:29.571 --rc geninfo_unexecuted_blocks=1 00:07:29.571 00:07:29.571 ' 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:29.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.571 --rc genhtml_branch_coverage=1 00:07:29.571 --rc genhtml_function_coverage=1 00:07:29.571 --rc genhtml_legend=1 00:07:29.571 --rc geninfo_all_blocks=1 00:07:29.571 --rc geninfo_unexecuted_blocks=1 00:07:29.571 00:07:29.571 ' 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.571 --rc genhtml_branch_coverage=1 00:07:29.571 --rc genhtml_function_coverage=1 00:07:29.571 --rc genhtml_legend=1 00:07:29.571 --rc geninfo_all_blocks=1 00:07:29.571 --rc geninfo_unexecuted_blocks=1 00:07:29.571 00:07:29.571 ' 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.571 ************************************ 00:07:29.571 START TEST nvmf_abort 00:07:29.571 ************************************ 00:07:29.571 20:27:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:29.832 * Looking for test storage... 
00:07:29.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.832 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.832 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.832 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.832 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.833 
20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.833 --rc genhtml_branch_coverage=1 00:07:29.833 --rc genhtml_function_coverage=1 00:07:29.833 --rc genhtml_legend=1 00:07:29.833 --rc geninfo_all_blocks=1 00:07:29.833 --rc 
geninfo_unexecuted_blocks=1 00:07:29.833 00:07:29.833 ' 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.833 --rc genhtml_branch_coverage=1 00:07:29.833 --rc genhtml_function_coverage=1 00:07:29.833 --rc genhtml_legend=1 00:07:29.833 --rc geninfo_all_blocks=1 00:07:29.833 --rc geninfo_unexecuted_blocks=1 00:07:29.833 00:07:29.833 ' 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:29.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.833 --rc genhtml_branch_coverage=1 00:07:29.833 --rc genhtml_function_coverage=1 00:07:29.833 --rc genhtml_legend=1 00:07:29.833 --rc geninfo_all_blocks=1 00:07:29.833 --rc geninfo_unexecuted_blocks=1 00:07:29.833 00:07:29.833 ' 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.833 --rc genhtml_branch_coverage=1 00:07:29.833 --rc genhtml_function_coverage=1 00:07:29.833 --rc genhtml_legend=1 00:07:29.833 --rc geninfo_all_blocks=1 00:07:29.833 --rc geninfo_unexecuted_blocks=1 00:07:29.833 00:07:29.833 ' 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
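The cmp_versions trace above (scripts/common.sh, checking the lcov version 1.15 against 2) walks the dotted components by hand with read -ra and per-index comparisons. A minimal sketch of the same ordering check, using GNU `sort -V` instead of SPDK's component loop (the function name `lt` is borrowed from the trace; this is not the project's actual implementation):

```shell
# lt A B: succeed when dotted version A orders strictly before B.
# Delegates the component-by-component comparison to GNU `sort -V`.
lt() {
  [ "$1" = "$2" ] && return 1   # equal versions are not "less than"
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

lt 1.15 2 && echo "1.15 sorts before 2"   # same verdict the trace reaches
```

The trace reaches the same conclusion (`return 0` from the `lt 1.15 2` call), which is why the lcov branch/function coverage options get enabled in the lines that follow.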
00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.833 20:27:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.833 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.834 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.834 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:29.834 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:29.834 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.834 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.435 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:36.436 20:27:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:36.436 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:36.436 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:36.436 20:27:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:36.436 Found net devices under 0000:af:00.0: cvl_0_0 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:07:36.436 Found net devices under 0000:af:00.1: cvl_0_1 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.436 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:36.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:36.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:07:36.436 00:07:36.436 --- 10.0.0.2 ping statistics --- 00:07:36.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.436 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:36.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:07:36.436 00:07:36.436 --- 10.0.0.1 ping statistics --- 00:07:36.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.436 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=174758 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 174758 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 174758 ']' 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.436 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:36.437 [2024-12-05 20:27:29.280577] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
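Earlier in this log, nvmf/common.sh line 33 emits "[: : integer expression expected" because an empty string reaches a numeric test (`'[' '' -eq 1 ']'`). A hedged sketch of the defensive pattern that avoids that class of error (`FLAG` is an illustrative variable name, not the script's actual one):

```shell
# An unset or empty variable breaks `[ "$FLAG" -eq 1 ]` with
# "integer expression expected"; defaulting it first avoids the error.
FLAG=""                          # simulate the empty value seen in the log
if [ "${FLAG:-0}" -eq 1 ]; then  # ':-0' substitutes 0 when FLAG is empty/unset
  echo "flag set"
else
  echo "flag not set"            # prints this: 0 -eq 1 is false
fi
```

The error in the log is harmless here (the `[` failure simply takes the false branch), but the `${var:-default}` expansion makes the test well-defined either way.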
00:07:36.437 [2024-12-05 20:27:29.280615] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.437 [2024-12-05 20:27:29.356446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.437 [2024-12-05 20:27:29.396787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.437 [2024-12-05 20:27:29.396822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.437 [2024-12-05 20:27:29.396829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.437 [2024-12-05 20:27:29.396835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.437 [2024-12-05 20:27:29.396839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
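The network plumbing traced above (nvmf_tcp_init in nvmf/common.sh) moves one port of the NIC into a namespace and addresses both ends, so the host side at 10.0.0.1 can reach the namespaced target at 10.0.0.2. A condensed dry-run sketch of those steps; `run()` only prints each command here because the real calls need root, and the iptables comment tagging from the log is omitted (device and namespace names are taken from the log):

```shell
run() { echo "+ $*"; }   # dry-run wrapper; change the body to `"$@"` to execute (root required)

run ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target port in
run ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator (host) address
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
run ping -c 1 10.0.0.2                                            # host -> namespaced target
```

The two ping checks in the log (host to 10.0.0.2, and `ip netns exec` back to 10.0.0.1) verify this wiring before nvmf_tgt is launched inside the namespace.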
00:07:36.437 [2024-12-05 20:27:29.398229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.437 [2024-12-05 20:27:29.398340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.437 [2024-12-05 20:27:29.398342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 [2024-12-05 20:27:29.530536] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 Malloc0 00:07:36.437 20:27:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 Delay0 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 [2024-12-05 20:27:29.608217] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.437 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:36.437 [2024-12-05 20:27:29.744940] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:38.977 Initializing NVMe Controllers 00:07:38.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:38.977 controller IO queue size 128 less than required 00:07:38.977 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:38.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:38.978 Initialization complete. Launching workers. 
00:07:38.978 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41363 00:07:38.978 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41424, failed to submit 62 00:07:38.978 success 41367, unsuccessful 57, failed 0 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.978 rmmod nvme_tcp 00:07:38.978 rmmod nvme_fabrics 00:07:38.978 rmmod nvme_keyring 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:38.978 20:27:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 174758 ']' 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 174758 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 174758 ']' 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 174758 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174758 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174758' 00:07:38.978 killing process with pid 174758 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 174758 00:07:38.978 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 174758 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.978 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.888 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.888 00:07:40.888 real 0m11.233s 00:07:40.888 user 0m11.758s 00:07:40.888 sys 0m5.263s 00:07:40.888 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.888 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:40.888 ************************************ 00:07:40.888 END TEST nvmf_abort 00:07:40.888 ************************************ 00:07:40.888 20:27:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:40.888 20:27:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.888 20:27:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.888 20:27:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.888 ************************************ 00:07:40.888 START TEST nvmf_ns_hotplug_stress 00:07:40.888 ************************************ 00:07:40.888 20:27:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:41.149 * Looking for test storage... 00:07:41.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.149 
20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.149 20:27:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.149 --rc genhtml_branch_coverage=1 00:07:41.149 --rc genhtml_function_coverage=1 00:07:41.149 --rc genhtml_legend=1 00:07:41.149 --rc geninfo_all_blocks=1 00:07:41.149 --rc geninfo_unexecuted_blocks=1 00:07:41.149 00:07:41.149 ' 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.149 --rc genhtml_branch_coverage=1 00:07:41.149 --rc genhtml_function_coverage=1 00:07:41.149 --rc genhtml_legend=1 00:07:41.149 --rc geninfo_all_blocks=1 00:07:41.149 --rc geninfo_unexecuted_blocks=1 00:07:41.149 00:07:41.149 ' 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.149 --rc genhtml_branch_coverage=1 00:07:41.149 --rc genhtml_function_coverage=1 00:07:41.149 --rc genhtml_legend=1 00:07:41.149 --rc geninfo_all_blocks=1 00:07:41.149 --rc geninfo_unexecuted_blocks=1 00:07:41.149 00:07:41.149 ' 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.149 --rc genhtml_branch_coverage=1 00:07:41.149 --rc genhtml_function_coverage=1 00:07:41.149 --rc genhtml_legend=1 00:07:41.149 --rc geninfo_all_blocks=1 00:07:41.149 --rc geninfo_unexecuted_blocks=1 00:07:41.149 
00:07:41.149 ' 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.149 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.150 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:47.729 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.729 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.729 20:27:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.729 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.729 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.729 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.729 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.729 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.729 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:47.730 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:47.730 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.730 20:27:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:47.730 Found net devices under 0000:af:00.0: cvl_0_0 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.730 20:27:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:47.730 Found net devices under 0000:af:00.1: cvl_0_1 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.730 20:27:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:47.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:07:47.730 00:07:47.730 --- 10.0.0.2 ping statistics --- 00:07:47.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.730 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:07:47.730 00:07:47.730 --- 10.0.0.1 ping statistics --- 00:07:47.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.730 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.730 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
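The trace above (nvmf/common.sh@250-291) is the TCP fixture setup: one NIC port is moved into a private network namespace to act as the target, the peer port stays in the root namespace as the initiator, port 4420 is opened, and reachability is verified with pings in both directions. A minimal dry-run condensation of those steps is sketched below; the `nvmf_tcp_init` function name and the `RUN=echo` wrapper are illustrative, not part of nvmf/common.sh, and running it for real requires root and the actual interface names.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based TCP fixture traced in the log.
# RUN=echo prints each command instead of executing it; unset RUN
# (and run as root, with real NIC names) to apply for real.
RUN=${RUN:-echo}

nvmf_tcp_init() {
  local target_if=cvl_0_0       # moved into the namespace, gets 10.0.0.2
  local initiator_if=cvl_0_1    # stays in the root namespace, gets 10.0.0.1
  local ns=cvl_0_0_ns_spdk

  $RUN ip -4 addr flush "$target_if"
  $RUN ip -4 addr flush "$initiator_if"
  $RUN ip netns add "$ns"
  $RUN ip link set "$target_if" netns "$ns"
  $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"
  $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  $RUN ip link set "$initiator_if" up
  $RUN ip netns exec "$ns" ip link set "$target_if" up
  $RUN ip netns exec "$ns" ip link set lo up
  # open the NVMe/TCP port on the initiator-side interface
  $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  # verify both directions before the target app starts
  $RUN ping -c 1 10.0.0.2
  $RUN ip netns exec "$ns" ping -c 1 10.0.0.1
}

nvmf_tcp_init
```

Because the target binary is later launched via `ip netns exec cvl_0_0_ns_spdk`, it sees only `cvl_0_0` and `lo`, which keeps the kernel initiator and the SPDK target on cleanly separated network stacks of the same host.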
tcp -o' 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=179048 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 179048 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 179048 ']' 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:47.731 [2024-12-05 20:27:40.578652] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:07:47.731 [2024-12-05 20:27:40.578690] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.731 [2024-12-05 20:27:40.650021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.731 [2024-12-05 20:27:40.688797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.731 [2024-12-05 20:27:40.688831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.731 [2024-12-05 20:27:40.688838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.731 [2024-12-05 20:27:40.688844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.731 [2024-12-05 20:27:40.688849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:47.731 [2024-12-05 20:27:40.690226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.731 [2024-12-05 20:27:40.690337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.731 [2024-12-05 20:27:40.690339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:47.731 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:47.731 [2024-12-05 20:27:40.983009] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.731 20:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:47.990 20:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.990 [2024-12-05 20:27:41.364371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.990 20:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.249 20:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:48.508 Malloc0 00:07:48.508 20:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:48.508 Delay0 00:07:48.768 20:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.768 20:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:49.028 NULL1 00:07:49.028 20:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:49.287 20:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:49.287 20:27:42 
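Once `nvmf_tgt` is listening, ns_hotplug_stress.sh@27-36 issues the RPC sequence traced above: create the TCP transport, create subsystem `cnode1` with its listeners, then attach two namespaces, a delay bdev layered on a malloc bdev and a resizable null bdev. The dry-run sketch below replays that sequence; the `setup_subsystem` helper and the `RPC=echo` wrapper are illustrative (the real test invokes `scripts/rpc.py` against the running target).

```shell
#!/usr/bin/env bash
# Dry-run replay of the subsystem/bdev setup RPCs traced in the log.
# RPC=echo prints the rpc.py invocations instead of issuing them.
RPC=${RPC:-echo rpc.py}
NQN=nqn.2016-06.io.spdk:cnode1

setup_subsystem() {
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0     # 32 MiB backing bdev
  # delay bdev over Malloc0: 1 ms average/p99 latency on reads and writes
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns "$NQN" Delay0      # namespace to hot-remove/add
  $RPC bdev_null_create NULL1 1000 512          # null bdev resized each pass
  $RPC nvmf_subsystem_add_ns "$NQN" NULL1
}

setup_subsystem
```

The delay bdev slows I/O enough that namespace removal reliably races with in-flight commands, which is the condition this stress test is probing.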
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=179348 00:07:49.287 20:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:49.287 20:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.287 20:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.546 20:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:49.546 20:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:49.806 true 00:07:49.806 20:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:49.806 20:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.066 20:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.066 20:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:50.066 20:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:50.326 true 00:07:50.326 20:27:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:50.326 20:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.586 20:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.845 20:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:50.845 20:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:50.845 true 00:07:50.845 20:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:50.845 20:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.104 20:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.364 20:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:51.364 20:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:51.624 true 00:07:51.624 20:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:51.624 20:27:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.624 20:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.883 20:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:51.883 20:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:52.141 true 00:07:52.141 20:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:52.141 20:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.399 20:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.399 20:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:52.399 20:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:52.657 true 00:07:52.657 20:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:52.657 20:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.916 20:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.176 20:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:53.176 20:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:53.176 true 00:07:53.176 20:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:53.176 20:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.435 20:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.694 20:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:53.694 20:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:53.954 true 00:07:53.954 20:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:53.954 20:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.213 
20:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.213 20:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:54.213 20:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:54.471 true 00:07:54.471 20:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:54.472 20:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.731 20:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.731 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:54.731 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:54.991 true 00:07:54.991 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:54.991 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.250 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.511 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:55.511 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:55.511 true 00:07:55.770 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:55.770 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.770 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.030 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:56.030 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:56.290 true 00:07:56.290 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:56.290 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.550 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.550 
20:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:56.550 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:56.809 true 00:07:56.809 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:56.809 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.068 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.325 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:57.325 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:57.325 true 00:07:57.325 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:57.325 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.584 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.843 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:57.843 20:27:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:58.103 true 00:07:58.103 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:58.103 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.363 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.363 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:58.363 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:58.622 true 00:07:58.622 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:58.622 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.881 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.141 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:59.141 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:59.141 true 00:07:59.141 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:59.141 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.400 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.660 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:59.660 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:59.921 true 00:07:59.921 20:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:07:59.921 20:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.181 20:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.181 20:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:00.181 20:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:00.454 true 00:08:00.454 20:27:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:00.455 20:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.714 20:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.973 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:00.973 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:00.973 true 00:08:00.973 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:00.973 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.232 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.491 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:01.491 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:01.750 true 00:08:01.750 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:01.750 20:27:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.009 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.009 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:02.009 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:02.267 true 00:08:02.267 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:02.267 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.534 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.792 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:02.792 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:02.792 true 00:08:02.792 20:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:02.792 20:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.051 20:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.326 20:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:03.326 20:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:03.586 true 00:08:03.586 20:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:03.586 20:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.586 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.845 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:03.845 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:04.104 true 00:08:04.104 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:04.104 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.364 
20:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.623 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:04.624 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:04.624 true 00:08:04.624 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:04.624 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.883 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.142 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:05.142 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:05.402 true 00:08:05.402 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:05.402 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.402 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.660 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:05.660 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:05.919 true 00:08:05.919 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:05.919 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.178 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.438 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:06.438 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:06.438 true 00:08:06.438 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:06.438 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.696 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.956 
20:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:06.956 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:07.216 true 00:08:07.216 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:07.216 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.475 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.475 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:07.475 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:07.734 true 00:08:07.734 20:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:07.734 20:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.994 20:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.253 20:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:08.253 20:28:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:08.253 true 00:08:08.253 20:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:08.253 20:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.513 20:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.772 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:08.772 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:08.772 true 00:08:09.032 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:09.032 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.032 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.291 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:09.291 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:09.550 true 00:08:09.550 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:09.550 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.810 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.070 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:10.070 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:10.070 true 00:08:10.070 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:10.070 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.329 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.589 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:10.589 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:10.589 true 00:08:10.848 20:28:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:10.849 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.849 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.108 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:11.108 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:11.372 true 00:08:11.372 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:11.372 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.632 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.632 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:11.632 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:11.890 true 00:08:11.890 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:11.890 20:28:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.149 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.408 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:12.408 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:12.408 true 00:08:12.668 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:12.668 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.668 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.928 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:12.928 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:13.187 true 00:08:13.187 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:13.187 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.447 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.447 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:13.447 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:13.706 true 00:08:13.706 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:13.706 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.965 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.224 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:14.224 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:14.224 true 00:08:14.224 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:14.224 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.484 
20:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.743 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:14.743 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:15.003 true 00:08:15.003 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:15.003 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.263 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.263 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:15.263 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:15.523 true 00:08:15.523 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:15.523 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.782 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.042 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:16.042 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:16.042 true 00:08:16.302 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:16.302 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.302 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.561 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:08:16.561 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:08:16.820 true 00:08:16.820 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:16.821 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.080 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.080 
20:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:17.080 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:17.341 true 00:08:17.341 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:17.341 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.600 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.859 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:17.859 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:17.859 true 00:08:18.126 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:18.126 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.126 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.385 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:18.385 20:28:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:18.644 true 00:08:18.644 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:18.644 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.903 20:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.903 20:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:18.903 20:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:19.163 true 00:08:19.163 20:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:19.163 20:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.423 20:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.423 Initializing NVMe Controllers 00:08:19.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:19.423 Controller IO queue size 128, less than required. 
00:08:19.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:19.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:19.423 Initialization complete. Launching workers. 00:08:19.423 ======================================================== 00:08:19.423 Latency(us) 00:08:19.423 Device Information : IOPS MiB/s Average min max 00:08:19.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29831.17 14.57 4290.68 1987.35 8037.13 00:08:19.423 ======================================================== 00:08:19.423 Total : 29831.17 14.57 4290.68 1987.35 8037.13 00:08:19.423 00:08:19.682 20:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:08:19.682 20:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:08:19.682 true 00:08:19.682 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 179348 00:08:19.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (179348) - No such process 00:08:19.682 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 179348 00:08:19.682 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.941 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.200 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:20.200 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:20.200 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:20.200 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.200 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:20.458 null0 00:08:20.458 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.458 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.458 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:20.458 null1 00:08:20.458 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.458 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.458 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:20.716 null2 00:08:20.716 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.716 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.716 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:20.974 null3 00:08:20.974 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.974 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.974 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:20.974 null4 00:08:20.974 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.232 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.232 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:21.232 null5 00:08:21.232 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.232 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.232 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:21.490 null6 00:08:21.490 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.490 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.490 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 
00:08:21.766 null7 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.766 
20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:21.766 
20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 185534 185535 185537 185539 185541 185542 185544 185546 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.766 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.766 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.766 
20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.766 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.766 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.766 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.766 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.766 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.024 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.282 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.282 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.282 20:28:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.282 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.282 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.282 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.282 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.282 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.282 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.282 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.282 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.542 20:28:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.542 20:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.802 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.061 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.320 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.320 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.320 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.320 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.320 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.321 20:28:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.321 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.580 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.839 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.098 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.099 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.099 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.099 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.099 20:28:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.099 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.099 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:24.099 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.099 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.099 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.099 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:24.099 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.357 20:28:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.357 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.358 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.358 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.358 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.627 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.627 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.627 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:24.627 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:24.627 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.627 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.627 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.627 20:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.627 20:28:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.627 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.886 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.886 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.886 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.886 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.886 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.886 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.886 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:24.886 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.146 
20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.146 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.406 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.666 20:28:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.666 rmmod nvme_tcp 00:08:25.666 rmmod nvme_fabrics 00:08:25.666 rmmod nvme_keyring 00:08:25.666 20:28:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 179048 ']' 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 179048 00:08:25.666 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 179048 ']' 00:08:25.667 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 179048 00:08:25.667 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:25.667 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.667 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 179048 00:08:25.667 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:25.667 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:25.667 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 179048' 00:08:25.667 killing process with pid 179048 00:08:25.667 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 179048 00:08:25.667 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 179048 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.927 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.836 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:27.836 00:08:27.836 real 0m46.940s 00:08:27.836 user 3m17.771s 00:08:27.836 sys 0m17.052s 00:08:27.836 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.836 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.836 ************************************ 00:08:27.836 END TEST nvmf_ns_hotplug_stress 00:08:27.836 ************************************ 00:08:27.836 20:28:21 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:27.837 20:28:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:27.837 20:28:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.837 20:28:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.097 ************************************ 00:08:28.097 START TEST nvmf_delete_subsystem 00:08:28.097 ************************************ 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:28.097 * Looking for test storage... 00:08:28.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.097 20:28:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.097 --rc genhtml_branch_coverage=1 00:08:28.097 --rc genhtml_function_coverage=1 00:08:28.097 --rc genhtml_legend=1 
00:08:28.097 --rc geninfo_all_blocks=1 00:08:28.097 --rc geninfo_unexecuted_blocks=1 00:08:28.097 00:08:28.097 ' 00:08:28.097 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.097 --rc genhtml_branch_coverage=1 00:08:28.097 --rc genhtml_function_coverage=1 00:08:28.097 --rc genhtml_legend=1 00:08:28.097 --rc geninfo_all_blocks=1 00:08:28.097 --rc geninfo_unexecuted_blocks=1 00:08:28.097 00:08:28.097 ' 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.098 --rc genhtml_branch_coverage=1 00:08:28.098 --rc genhtml_function_coverage=1 00:08:28.098 --rc genhtml_legend=1 00:08:28.098 --rc geninfo_all_blocks=1 00:08:28.098 --rc geninfo_unexecuted_blocks=1 00:08:28.098 00:08:28.098 ' 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.098 --rc genhtml_branch_coverage=1 00:08:28.098 --rc genhtml_function_coverage=1 00:08:28.098 --rc genhtml_legend=1 00:08:28.098 --rc geninfo_all_blocks=1 00:08:28.098 --rc geninfo_unexecuted_blocks=1 00:08:28.098 00:08:28.098 ' 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:28.098 20:28:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:28.098 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.679 20:28:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:34.679 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:34.679 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.679 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:34.680 Found net devices under 0000:af:00.0: cvl_0_0 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.680 20:28:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:34.680 Found net devices under 0000:af:00.1: cvl_0_1 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
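The trace above (nvmf/common.sh, `gather_supported_nvmf_pci_devs` through `nvmf_tcp_init`) walks /sys/bus/pci/devices, keeps NICs whose vendor/device pair matches the e810/x722/mlx allow-lists, collects the interface names found under each match (here cvl_0_0 and cvl_0_1), and then splits them into a target and an initiator side. The discovery idea can be sketched stand-alone against a fabricated sysfs tree, so it needs no real hardware — the tree layout, paths, and device IDs below are mocked for the demo, not SPDK's actual helper:

```shell
set -euo pipefail

# Build a fake sysfs tree with two Intel E810 (0x8086:0x159b) functions,
# mirroring the 0000:af:00.0 / 0000:af:00.1 devices seen in the log.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"
echo 0x8086 > "$sysfs/0000:af:00.0/vendor"; echo 0x159b > "$sysfs/0000:af:00.0/device"
echo 0x8086 > "$sysfs/0000:af:00.1/vendor"; echo 0x159b > "$sysfs/0000:af:00.1/device"

e810=(0x1592 0x159b)   # E810 device IDs, as matched in the trace
net_devs=()
for pci in "$sysfs"/*; do
    vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
    [[ $vendor == 0x8086 ]] || continue          # Intel only, for this sketch
    for id in "${e810[@]}"; do
        if [[ $device == "$id" ]]; then
            # Interface names live under <pci device>/net/
            for net in "$pci"/net/*; do net_devs+=("${net##*/}"); done
        fi
    done
done
printf '%s\n' "${net_devs[@]}"    # prints cvl_0_0 then cvl_0_1
rm -rf "$sysfs"
```

With two interfaces found, the log's `(( 2 > 1 ))` branch then assigns the first as `NVMF_TARGET_INTERFACE` and the second as the initiator, before moving the target side into the `cvl_0_0_ns_spdk` namespace.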
00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:08:34.680 00:08:34.680 --- 10.0.0.2 ping statistics --- 00:08:34.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.680 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:08:34.680 00:08:34.680 --- 10.0.0.1 ping statistics --- 00:08:34.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.680 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=190239 00:08:34.680 20:28:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 190239 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 190239 ']' 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.680 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:34.680 [2024-12-05 20:28:27.593455] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:08:34.680 [2024-12-05 20:28:27.593498] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.680 [2024-12-05 20:28:27.667622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:34.680 [2024-12-05 20:28:27.705999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:34.680 [2024-12-05 20:28:27.706034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.680 [2024-12-05 20:28:27.706040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.680 [2024-12-05 20:28:27.706046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.680 [2024-12-05 20:28:27.706050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.680 [2024-12-05 20:28:27.707211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.680 [2024-12-05 20:28:27.707213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 [2024-12-05 20:28:28.440924] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 [2024-12-05 20:28:28.461087] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.251 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 NULL1 00:08:35.252 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.252 20:28:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:35.252 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.252 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.252 Delay0 00:08:35.252 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.252 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.252 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.252 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.252 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.252 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=190288 00:08:35.252 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:35.252 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:35.252 [2024-12-05 20:28:28.572752] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
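The rpc_cmd calls traced above configure the entire target in six steps: create the TCP transport, create subsystem cnode1, add a listener on 10.0.0.2:4420, create a null bdev, wrap it in a delay bdev that injects 1,000,000 us of latency per operation, and attach the delay bdev as a namespace. Condensed below with rpc_cmd stubbed to echo, so the sequence can be read and dry-run without a live target; in the harness rpc_cmd forwards to scripts/rpc.py against /var/tmp/spdk.sock:

```shell
# The six RPC calls from delete_subsystem.sh, condensed. rpc_cmd is stubbed
# here so the sequence is inspectable without a running nvmf_tgt.
rpc_cmd() { echo "rpc.py $*"; }

setup_delete_subsystem_target() {
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks
    # 1,000,000 us (= 1 s) average and p99 latency on reads and writes: this
    # is what keeps I/O in flight when the subsystem is deleted mid-test.
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}
setup_delete_subsystem_target
```

With the 1 s delay bdev in place, the 5-second perf run launched next is guaranteed to have queued I/O outstanding when `nvmf_delete_subsystem` fires, which is exactly what produces the error records that follow.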
00:08:37.166 20:28:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.166 20:28:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.166 20:28:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:37.492 Write completed with error (sct=0, sc=8) 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 starting I/O failed: -6 00:08:37.492 Write completed with error (sct=0, sc=8) 00:08:37.492 Write completed with error (sct=0, sc=8) 00:08:37.492 Write completed with error (sct=0, sc=8) 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 starting I/O failed: -6 00:08:37.492 Write completed with error (sct=0, sc=8) 00:08:37.492 Write completed with error (sct=0, sc=8) 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 starting I/O failed: -6 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 Write completed with error (sct=0, sc=8) 00:08:37.492 starting I/O failed: -6 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 Write completed with error (sct=0, sc=8) 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 starting I/O failed: -6 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 Write completed with error (sct=0, sc=8) 00:08:37.492 Write completed with error (sct=0, sc=8) 00:08:37.492 Write completed with error (sct=0, sc=8) 00:08:37.492 starting I/O failed: -6 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.492 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error 
(sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with 
error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read 
completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 starting I/O failed: -6 00:08:37.493 starting I/O failed: -6 00:08:37.493 starting I/O failed: -6 00:08:37.493 starting I/O failed: -6 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error 
(sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 starting I/O failed: -6 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 [2024-12-05 20:28:30.691969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7fc78000d4d0 is same with the state(6) to be set 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read 
completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Write completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.493 Read completed with error (sct=0, sc=8) 00:08:37.494 Write completed with error (sct=0, sc=8) 00:08:37.494 Write completed with error (sct=0, sc=8) 00:08:37.494 Read completed with error (sct=0, sc=8) 00:08:37.494 Write completed with error (sct=0, sc=8) 00:08:37.494 Write completed with error (sct=0, sc=8) 00:08:37.494 Read completed with error (sct=0, sc=8) 00:08:37.494 Read completed with error (sct=0, sc=8) 00:08:37.494 Write completed with error (sct=0, sc=8) 00:08:37.494 Read completed with error (sct=0, sc=8) 00:08:37.494 Write completed with error (sct=0, sc=8) 00:08:37.494 Read completed with error (sct=0, sc=8) 00:08:38.430 [2024-12-05 20:28:31.665768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfb5f0 is same with the state(6) to be set 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with 
error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 [2024-12-05 20:28:31.692153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfa2c0 is same with the state(6) to be set 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 
00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Write completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.430 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 [2024-12-05 20:28:31.692326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9f00 is same with the state(6) to be set 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read 
completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 [2024-12-05 20:28:31.694476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc78000d020 is same with the state(6) to be set 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Write completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 Read completed with error (sct=0, sc=8) 00:08:38.431 [2024-12-05 
20:28:31.695115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc78000d800 is same with the state(6) to be set
00:08:38.431 Initializing NVMe Controllers
00:08:38.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:38.431 Controller IO queue size 128, less than required.
00:08:38.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:38.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:38.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:38.431 Initialization complete. Launching workers.
00:08:38.431 ========================================================
00:08:38.431                                                                                            Latency(us)
00:08:38.431 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:08:38.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   183.75     0.09  910783.90     387.04 1007223.04
00:08:38.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   163.33     0.08  909635.30     255.09 1009444.05
00:08:38.431 ========================================================
00:08:38.431 Total                                                                    :   347.08     0.17  910243.38     255.09 1009444.05
00:08:38.431
00:08:38.431 [2024-12-05 20:28:31.695719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfb5f0 (9): Bad file descriptor
00:08:38.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:38.431 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.431 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:38.431 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 190288
00:08:38.431 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 190288 00:08:38.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (190288) - No such process 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 190288 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 190288 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 190288 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:38.999 20:28:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.999 [2024-12-05 20:28:32.220668] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=191060 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 
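After relaunching perf (pid 191060), the harness enters the same polling idiom it used for pid 190288: probe with `kill -0` every 0.5 s until the perf process exits, giving up after a bounded number of retries. Extracted as a standalone function — the name is illustrative; in delete_subsystem.sh this is an inline `(( delay++ > 20 ))` / `kill -0` / `sleep 0.5` loop:

```shell
# The kill -0 / sleep 0.5 polling loop from delete_subsystem.sh, as a
# reusable function. Returns 0 once the pid is gone, 1 if the retry budget
# (max, counted in half-second steps) is exhausted first.
wait_for_perf_exit() {
    local pid=$1
    local max=${2:-20}
    local delay=0
    while kill -0 "$pid" 2> /dev/null; do
        (( delay++ > max )) && return 1   # still running after ~max*0.5 s
        sleep 0.5
    done
    return 0
}
```

When the process is already gone, `kill -0` fails immediately and the loop falls through, matching the `kill: (191060) - No such process` message later in the trace.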
00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:38.999 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 191060 00:08:39.000 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:39.000 [2024-12-05 20:28:32.304589] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:39.568 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:39.568 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 191060 00:08:39.568 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:39.827 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:39.827 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 191060 00:08:39.827 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:40.396 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:40.396 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 191060 00:08:40.396 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:40.965 20:28:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:40.965 20:28:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 191060
00:08:40.965 20:28:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:41.534 20:28:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:41.534 20:28:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 191060
00:08:41.534 20:28:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:42.103 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:42.103 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 191060
00:08:42.103 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:42.103 Initializing NVMe Controllers
00:08:42.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:42.103 Controller IO queue size 128, less than required.
00:08:42.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:42.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:42.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:42.103 Initialization complete. Launching workers.
00:08:42.103 ========================================================
00:08:42.103 Latency(us)
00:08:42.103 Device Information : IOPS MiB/s Average min max
00:08:42.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001902.93 1000119.31 1041494.15
00:08:42.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003341.53 1000121.48 1008945.01
00:08:42.104 ========================================================
00:08:42.104 Total : 256.00 0.12 1002622.23 1000119.31 1041494.15
00:08:42.104
00:08:42.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:42.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 191060
00:08:42.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (191060) - No such process
00:08:42.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 191060
00:08:42.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:42.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:42.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:42.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:42.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:42.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:42.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:42.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:08:42.363 rmmod nvme_tcp 00:08:42.363 rmmod nvme_fabrics 00:08:42.622 rmmod nvme_keyring 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 190239 ']' 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 190239 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 190239 ']' 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 190239 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 190239 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 190239' 00:08:42.623 killing process with pid 190239 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 190239 00:08:42.623 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 190239 
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:42.623 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:45.167
00:08:45.167 real 0m16.829s
00:08:45.167 user 0m30.611s
00:08:45.167 sys 0m5.433s
00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:45.167 ************************************
00:08:45.167 END TEST
nvmf_delete_subsystem 00:08:45.167 ************************************ 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.167 ************************************ 00:08:45.167 START TEST nvmf_host_management 00:08:45.167 ************************************ 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:45.167 * Looking for test storage... 00:08:45.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.167 20:28:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:45.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.167 --rc genhtml_branch_coverage=1 00:08:45.167 --rc genhtml_function_coverage=1 00:08:45.167 --rc genhtml_legend=1 00:08:45.167 --rc 
geninfo_all_blocks=1 00:08:45.167 --rc geninfo_unexecuted_blocks=1 00:08:45.167 00:08:45.167 ' 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:45.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.167 --rc genhtml_branch_coverage=1 00:08:45.167 --rc genhtml_function_coverage=1 00:08:45.167 --rc genhtml_legend=1 00:08:45.167 --rc geninfo_all_blocks=1 00:08:45.167 --rc geninfo_unexecuted_blocks=1 00:08:45.167 00:08:45.167 ' 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:45.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.167 --rc genhtml_branch_coverage=1 00:08:45.167 --rc genhtml_function_coverage=1 00:08:45.167 --rc genhtml_legend=1 00:08:45.167 --rc geninfo_all_blocks=1 00:08:45.167 --rc geninfo_unexecuted_blocks=1 00:08:45.167 00:08:45.167 ' 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:45.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.167 --rc genhtml_branch_coverage=1 00:08:45.167 --rc genhtml_function_coverage=1 00:08:45.167 --rc genhtml_legend=1 00:08:45.167 --rc geninfo_all_blocks=1 00:08:45.167 --rc geninfo_unexecuted_blocks=1 00:08:45.167 00:08:45.167 ' 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.167 
20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.167 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.168 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:51.749 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:51.749 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.749 20:28:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:51.749 Found net devices under 0000:af:00.0: cvl_0_0 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:51.749 Found net devices under 0000:af:00.1: cvl_0_1 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:51.749 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:51.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:08:51.750 00:08:51.750 --- 10.0.0.2 ping statistics --- 00:08:51.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.750 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:08:51.750 00:08:51.750 --- 10.0.0.1 ping statistics --- 00:08:51.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.750 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.750 20:28:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=195354 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 195354 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 195354 ']' 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.750 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:51.750 [2024-12-05 20:28:44.454443] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:08:51.750 [2024-12-05 20:28:44.454490] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.750 [2024-12-05 20:28:44.531411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.750 [2024-12-05 20:28:44.573956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.750 [2024-12-05 20:28:44.573992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.750 [2024-12-05 20:28:44.574000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.750 [2024-12-05 20:28:44.574006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.750 [2024-12-05 20:28:44.574011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:51.750 [2024-12-05 20:28:44.575615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.750 [2024-12-05 20:28:44.575727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.750 [2024-12-05 20:28:44.575838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.750 [2024-12-05 20:28:44.575839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.009 [2024-12-05 20:28:45.310106] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:52.009 20:28:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.009 Malloc0 00:08:52.009 [2024-12-05 20:28:45.386121] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=195642 00:08:52.009 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 195642 /var/tmp/bdevperf.sock 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 195642 ']' 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:52.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.010 { 00:08:52.010 "params": { 00:08:52.010 "name": "Nvme$subsystem", 00:08:52.010 "trtype": "$TEST_TRANSPORT", 00:08:52.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.010 "adrfam": "ipv4", 00:08:52.010 "trsvcid": "$NVMF_PORT", 00:08:52.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.010 "hdgst": ${hdgst:-false}, 
00:08:52.010 "ddgst": ${ddgst:-false} 00:08:52.010 }, 00:08:52.010 "method": "bdev_nvme_attach_controller" 00:08:52.010 } 00:08:52.010 EOF 00:08:52.010 )") 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:52.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:52.269 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.270 "params": { 00:08:52.270 "name": "Nvme0", 00:08:52.270 "trtype": "tcp", 00:08:52.270 "traddr": "10.0.0.2", 00:08:52.270 "adrfam": "ipv4", 00:08:52.270 "trsvcid": "4420", 00:08:52.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:52.270 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:52.270 "hdgst": false, 00:08:52.270 "ddgst": false 00:08:52.270 }, 00:08:52.270 "method": "bdev_nvme_attach_controller" 00:08:52.270 }' 00:08:52.270 [2024-12-05 20:28:45.481268] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:08:52.270 [2024-12-05 20:28:45.481309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid195642 ] 00:08:52.270 [2024-12-05 20:28:45.554772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.270 [2024-12-05 20:28:45.593190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.529 Running I/O for 10 seconds... 
00:08:53.101 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.101 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:53.101 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:53.101 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.101 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.101 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.101 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.101 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1155 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1155 -ge 100 ']' 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.102 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.102 [2024-12-05 20:28:46.367165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:53.102 [2024-12-05 20:28:46.367203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.367212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:53.102 [2024-12-05 20:28:46.367219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.367226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:53.102 [2024-12-05 20:28:46.367232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.367239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:53.102 [2024-12-05 20:28:46.367245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.367251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x688630 is same with the state(6) to be set 00:08:53.102 [2024-12-05 20:28:46.368073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 
[2024-12-05 20:28:46.368126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.102 [2024-12-05 20:28:46.368342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.102 [2024-12-05 20:28:46.368350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368443] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368520] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 
20:28:46.368677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.103 [2024-12-05 20:28:46.368751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.103 [2024-12-05 20:28:46.368759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:53.104 [2024-12-05 20:28:46.368908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.104 [2024-12-05 20:28:46.368968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.104 [2024-12-05 20:28:46.368975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a15b0 is same with the state(6) to be set 00:08:53.104 [2024-12-05 20:28:46.369861] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:53.104 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.104 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:53.104 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.104 task offset: 32640 on job bdev=Nvme0n1 fails 00:08:53.104 00:08:53.104 Latency(us) 00:08:53.104 [2024-12-05T19:28:46.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.104 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:53.104 Job: Nvme0n1 ended in about 0.57 seconds with error 00:08:53.104 Verification LBA range: start 0x0 length 0x400 00:08:53.104 Nvme0n1 : 0.57 2119.20 132.45 111.54 0.00 28122.29 4081.11 24546.21 00:08:53.104 [2024-12-05T19:28:46.545Z] =================================================================================================================== 00:08:53.104 [2024-12-05T19:28:46.545Z] Total : 2119.20 132.45 111.54 0.00 28122.29 4081.11 24546.21 00:08:53.104 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.104 [2024-12-05 20:28:46.372036] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:53.104 [2024-12-05 20:28:46.372054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x688630 (9): Bad file descriptor 00:08:53.104 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.104 20:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:53.104 [2024-12-05 20:28:46.421291] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:54.065 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 195642 00:08:54.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (195642) - No such process 00:08:54.065 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:54.065 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:54.065 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:54.065 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:54.065 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:54.065 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:54.065 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:54.065 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:54.065 { 00:08:54.065 "params": { 00:08:54.065 "name": "Nvme$subsystem", 00:08:54.065 "trtype": "$TEST_TRANSPORT", 00:08:54.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.065 "adrfam": "ipv4", 00:08:54.065 "trsvcid": "$NVMF_PORT", 00:08:54.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.065 "hdgst": ${hdgst:-false}, 00:08:54.065 "ddgst": ${ddgst:-false} 00:08:54.065 }, 00:08:54.065 "method": 
"bdev_nvme_attach_controller" 00:08:54.066 } 00:08:54.066 EOF 00:08:54.066 )") 00:08:54.066 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:54.066 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:54.066 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:54.066 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:54.066 "params": { 00:08:54.066 "name": "Nvme0", 00:08:54.066 "trtype": "tcp", 00:08:54.066 "traddr": "10.0.0.2", 00:08:54.066 "adrfam": "ipv4", 00:08:54.066 "trsvcid": "4420", 00:08:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:54.066 "hdgst": false, 00:08:54.066 "ddgst": false 00:08:54.066 }, 00:08:54.066 "method": "bdev_nvme_attach_controller" 00:08:54.066 }' 00:08:54.066 [2024-12-05 20:28:47.433461] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:08:54.066 [2024-12-05 20:28:47.433507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196015 ] 00:08:54.324 [2024-12-05 20:28:47.507631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.324 [2024-12-05 20:28:47.546505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.324 Running I/O for 1 seconds... 
00:08:55.700 2176.00 IOPS, 136.00 MiB/s 00:08:55.700 Latency(us) 00:08:55.700 [2024-12-05T19:28:49.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.700 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:55.700 Verification LBA range: start 0x0 length 0x400 00:08:55.700 Nvme0n1 : 1.01 2212.19 138.26 0.00 0.00 28500.15 5749.29 24546.21 00:08:55.700 [2024-12-05T19:28:49.141Z] =================================================================================================================== 00:08:55.700 [2024-12-05T19:28:49.141Z] Total : 2212.19 138.26 0.00 0.00 28500.15 5749.29 24546.21 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.700 20:28:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.700 rmmod nvme_tcp 00:08:55.700 rmmod nvme_fabrics 00:08:55.700 rmmod nvme_keyring 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 195354 ']' 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 195354 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 195354 ']' 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 195354 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.700 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 195354 00:08:55.700 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:55.700 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:55.700 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 195354' 00:08:55.700 killing process with pid 195354 00:08:55.700 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 195354 00:08:55.700 20:28:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 195354 00:08:55.959 [2024-12-05 20:28:49.201481] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.959 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.862 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:58.121 00:08:58.121 real 0m13.112s 00:08:58.121 user 0m22.823s 
00:08:58.121 sys 0m5.654s 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:58.121 ************************************ 00:08:58.121 END TEST nvmf_host_management 00:08:58.121 ************************************ 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.121 ************************************ 00:08:58.121 START TEST nvmf_lvol 00:08:58.121 ************************************ 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:58.121 * Looking for test storage... 
00:08:58.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.121 20:28:51 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:58.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.121 --rc genhtml_branch_coverage=1 00:08:58.121 --rc genhtml_function_coverage=1 00:08:58.121 --rc genhtml_legend=1 00:08:58.121 --rc geninfo_all_blocks=1 00:08:58.121 --rc geninfo_unexecuted_blocks=1 
00:08:58.121 00:08:58.121 ' 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:58.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.121 --rc genhtml_branch_coverage=1 00:08:58.121 --rc genhtml_function_coverage=1 00:08:58.121 --rc genhtml_legend=1 00:08:58.121 --rc geninfo_all_blocks=1 00:08:58.121 --rc geninfo_unexecuted_blocks=1 00:08:58.121 00:08:58.121 ' 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:58.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.121 --rc genhtml_branch_coverage=1 00:08:58.121 --rc genhtml_function_coverage=1 00:08:58.121 --rc genhtml_legend=1 00:08:58.121 --rc geninfo_all_blocks=1 00:08:58.121 --rc geninfo_unexecuted_blocks=1 00:08:58.121 00:08:58.121 ' 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:58.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.121 --rc genhtml_branch_coverage=1 00:08:58.121 --rc genhtml_function_coverage=1 00:08:58.121 --rc genhtml_legend=1 00:08:58.121 --rc geninfo_all_blocks=1 00:08:58.121 --rc geninfo_unexecuted_blocks=1 00:08:58.121 00:08:58.121 ' 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.121 20:28:51 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.121 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.382 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
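The trace entries above build per-family PCI ID arrays (Intel E810 `0x1592`/`0x159b`, X722 `0x37d2`, and a list of Mellanox ConnectX IDs) out of `pci_bus_cache`, then match each discovered device against them. A minimal standalone sketch of that vendor:device classification, with the ID list copied from the log (the helper name `classify_nic` is illustrative only, not part of `nvmf/common.sh`):

```shell
# Hedged sketch: map a "vendor:device" PCI ID string to the NIC family the
# trace distinguishes. IDs are taken from the nvmf/common.sh lines above;
# the function itself is a made-up illustration.
classify_nic() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice driver)
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX family
    *)                           echo unknown ;;
  esac
}

# The two ports this run reports ("Found 0000:af:00.0 (0x8086 - 0x159b)"):
classify_nic 0x8086:0x159b    # prints "e810"
```

In the real script the match result decides which kernel driver check (`ice`, `i40e`, `mlx5_core`) and which RDMA-vs-TCP branch runs next.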
00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:04.958 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:04.958 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.958 
20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:04.958 Found net devices under 0000:af:00.0: cvl_0_0 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.958 20:28:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.958 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:04.959 Found net devices under 0000:af:00.1: cvl_0_1 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:04.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:04.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:09:04.959 00:09:04.959 --- 10.0.0.2 ping statistics --- 00:09:04.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.959 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:09:04.959 00:09:04.959 --- 10.0.0.1 ping statistics --- 00:09:04.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.959 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=200036 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 200036 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 200036 ']' 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.959 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.959 [2024-12-05 20:28:57.691960] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:09:04.959 [2024-12-05 20:28:57.692003] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.959 [2024-12-05 20:28:57.767786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.959 [2024-12-05 20:28:57.808820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.959 [2024-12-05 20:28:57.808850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.959 [2024-12-05 20:28:57.808856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.959 [2024-12-05 20:28:57.808862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.959 [2024-12-05 20:28:57.808867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
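The `waitforlisten 200036` step in this trace blocks until the freshly launched `nvmf_tgt` is both still alive and listening on its RPC UNIX socket (`/var/tmp/spdk.sock`, the default path the log prints). A rough sketch of that polling loop follows; the real helper in `autotest_common.sh` differs in details and also confirms readiness with an actual RPC call, which this sketch replaces with a simple path-existence check:

```shell
# Hedged re-creation of the waitforlisten pattern: succeed once the pid is
# alive AND the socket path has appeared; fail if the process dies first or
# the retry budget runs out. Not the real autotest_common.sh implementation.
waitforlisten_sketch() {
  pid=$1 sock=${2:-/var/tmp/spdk.sock}
  i=0
  while [ "$i" -lt 100 ]; do
    kill -0 "$pid" 2>/dev/null || return 1   # target exited early
    [ -e "$sock" ] && return 0               # socket path is up
    sleep 0.1
    i=$((i + 1))
  done
  return 1                                   # timed out
}
```

Only after this returns does the test proceed to issue `rpc.py` commands (transport, bdev, lvstore, subsystem creation) against the socket.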
00:09:04.959 [2024-12-05 20:28:57.810115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.959 [2024-12-05 20:28:57.810228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.959 [2024-12-05 20:28:57.810229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.218 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.218 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:05.218 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:05.218 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:05.218 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:05.218 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.218 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:05.478 [2024-12-05 20:28:58.700561] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.478 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.738 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:05.738 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.738 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:05.738 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:05.998 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:06.258 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5846242c-6b02-407c-a94a-77a1243c2ce4 00:09:06.258 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5846242c-6b02-407c-a94a-77a1243c2ce4 lvol 20 00:09:06.518 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0db33f16-d428-4b6a-aef0-586383451af6 00:09:06.518 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:06.518 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0db33f16-d428-4b6a-aef0-586383451af6 00:09:06.778 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:07.037 [2024-12-05 20:29:00.239539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.037 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:07.037 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=200566 00:09:07.037 20:29:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:07.037 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:08.417 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0db33f16-d428-4b6a-aef0-586383451af6 MY_SNAPSHOT 00:09:08.417 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7fdc5acb-8f5e-4d74-bf79-18ec5f13e6e8 00:09:08.417 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0db33f16-d428-4b6a-aef0-586383451af6 30 00:09:08.677 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7fdc5acb-8f5e-4d74-bf79-18ec5f13e6e8 MY_CLONE 00:09:08.937 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=52a32c4b-708b-4120-b536-91c8db2eefdb 00:09:08.937 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 52a32c4b-708b-4120-b536-91c8db2eefdb 00:09:09.508 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 200566 00:09:17.634 Initializing NVMe Controllers 00:09:17.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:17.634 Controller IO queue size 128, less than required. 00:09:17.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:17.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:17.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:17.634 Initialization complete. Launching workers. 00:09:17.634 ======================================================== 00:09:17.634 Latency(us) 00:09:17.634 Device Information : IOPS MiB/s Average min max 00:09:17.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13089.10 51.13 9783.11 1449.51 52337.81 00:09:17.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12979.50 50.70 9863.13 3547.76 51220.93 00:09:17.634 ======================================================== 00:09:17.634 Total : 26068.59 101.83 9822.95 1449.51 52337.81 00:09:17.634 00:09:17.634 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:17.634 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0db33f16-d428-4b6a-aef0-586383451af6 00:09:17.894 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5846242c-6b02-407c-a94a-77a1243c2ce4 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.156 rmmod nvme_tcp 00:09:18.156 rmmod nvme_fabrics 00:09:18.156 rmmod nvme_keyring 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 200036 ']' 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 200036 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 200036 ']' 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 200036 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 200036 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 200036' 00:09:18.156 killing process with pid 200036 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 200036 00:09:18.156 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 200036 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.416 20:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.956 20:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.956 00:09:20.956 real 0m22.432s 00:09:20.956 user 1m4.383s 00:09:20.956 sys 0m7.703s 00:09:20.956 20:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.956 20:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:20.956 ************************************ 00:09:20.956 END TEST nvmf_lvol 00:09:20.956 
************************************ 00:09:20.956 20:29:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:20.956 20:29:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:20.956 20:29:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.956 20:29:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.956 ************************************ 00:09:20.956 START TEST nvmf_lvs_grow 00:09:20.956 ************************************ 00:09:20.956 20:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:20.956 * Looking for test storage... 00:09:20.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.956 20:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:20.956 20:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:09:20.956 20:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:20.956 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:20.956 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.956 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.956 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.956 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.956 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:09:20.956 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.956 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.956 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:20.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.957 --rc genhtml_branch_coverage=1 00:09:20.957 --rc genhtml_function_coverage=1 00:09:20.957 --rc genhtml_legend=1 00:09:20.957 --rc geninfo_all_blocks=1 00:09:20.957 --rc geninfo_unexecuted_blocks=1 00:09:20.957 00:09:20.957 ' 
00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:20.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.957 --rc genhtml_branch_coverage=1 00:09:20.957 --rc genhtml_function_coverage=1 00:09:20.957 --rc genhtml_legend=1 00:09:20.957 --rc geninfo_all_blocks=1 00:09:20.957 --rc geninfo_unexecuted_blocks=1 00:09:20.957 00:09:20.957 ' 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:20.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.957 --rc genhtml_branch_coverage=1 00:09:20.957 --rc genhtml_function_coverage=1 00:09:20.957 --rc genhtml_legend=1 00:09:20.957 --rc geninfo_all_blocks=1 00:09:20.957 --rc geninfo_unexecuted_blocks=1 00:09:20.957 00:09:20.957 ' 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:20.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.957 --rc genhtml_branch_coverage=1 00:09:20.957 --rc genhtml_function_coverage=1 00:09:20.957 --rc genhtml_legend=1 00:09:20.957 --rc geninfo_all_blocks=1 00:09:20.957 --rc geninfo_unexecuted_blocks=1 00:09:20.957 00:09:20.957 ' 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.957 20:29:14 
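The `lt 1.15 2` check above (used to pick lcov coverage options) comes from scripts/common.sh's `cmp_versions`, which splits dotted version strings into arrays on `IFS=.-:` and compares them field by field, padding missing fields with zero. A self-contained sketch of that idea, with an illustrative `version_lt` name and assuming purely numeric fields as in the log:

```shell
# Sketch of the field-by-field version comparison seen in scripts/common.sh.
# Returns 0 (true) when $1 is strictly lower than $2; fields must be numeric.
version_lt() {
    local -a ver1 ver2
    local i len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((i = 0; i < len; i++)); do
        # missing fields compare as 0, so 1.15 vs 2 behaves like 1.15.0 vs 2.0.0
        if (( ${ver1[i]:-0} < ${ver2[i]:-0} )); then return 0; fi
        if (( ${ver1[i]:-0} > ${ver2[i]:-0} )); then return 1; fi
    done
    return 1    # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Comparing field by field avoids the classic string-comparison trap where `"1.9" > "1.10"` lexicographically.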
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.957 
20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.957 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.958 20:29:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.958 
20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:20.958 20:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.534 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:27.535 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:27.535 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.535 
20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:27.535 Found net devices under 0000:af:00.0: cvl_0_0 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:27.535 Found net devices under 0000:af:00.1: cvl_0_1 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.535 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:27.535 20:29:20 
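The `nvmf_tcp_init` sequence above moves one port of the NIC pair into a fresh network namespace, so the target side (`cvl_0_0`, 10.0.0.2) and the initiator side (`cvl_0_1`, 10.0.0.1) exchange traffic over a real link on the same host. A dry-run sketch of those steps; interface names, addresses, and port 4420 are taken from the log, while the function name and the `$run` prefix are illustrative, and the real commands need root:

```shell
# Dry-run sketch of the namespace topology nvmf_tcp_init builds in the log.
# By default the commands are only printed; pass a different runner (e.g.
# "sudo") to execute them for real.
setup_nvmf_tcp_net() {
    local run=${1:-echo}            # default: just print the commands
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1

    $run ip netns add "$ns"
    $run ip link set "$tgt" netns "$ns"        # target port lives in the netns
    $run ip addr add 10.0.0.1/24 dev "$ini"    # initiator side address
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    $run ip link set "$ini" up
    $run ip netns exec "$ns" ip link set "$tgt" up
    $run ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP listener port through the host firewall
    $run iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
}

setup_nvmf_tcp_net   # prints the commands without touching the system
```

The log then verifies the link with `ping` in both directions before launching `nvmf_tgt` inside the namespace via `ip netns exec`, which is why `NVMF_APP` gets prefixed with `NVMF_TARGET_NS_CMD`.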
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:27.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:09:27.535 00:09:27.535 --- 10.0.0.2 ping statistics --- 00:09:27.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.535 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:09:27.535 00:09:27.535 --- 10.0.0.1 ping statistics --- 00:09:27.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.535 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=206359 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 206359 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 206359 ']' 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.535 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.536 [2024-12-05 20:29:20.217979] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:09:27.536 [2024-12-05 20:29:20.218026] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.536 [2024-12-05 20:29:20.293326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.536 [2024-12-05 20:29:20.332010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.536 [2024-12-05 20:29:20.332044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.536 [2024-12-05 20:29:20.332051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.536 [2024-12-05 20:29:20.332056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.536 [2024-12-05 20:29:20.332064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:27.536 [2024-12-05 20:29:20.332580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:27.536 [2024-12-05 20:29:20.620692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.536 ************************************ 00:09:27.536 START TEST lvs_grow_clean 00:09:27.536 ************************************ 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:27.536 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:27.796 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:27.796 20:29:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:27.796 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:28.055 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:28.055 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:28.055 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 lvol 150 00:09:28.055 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c2145348-83f9-450f-82ca-391d6b1c2d2f 00:09:28.055 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.055 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:28.313 [2024-12-05 20:29:21.605284] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:28.313 [2024-12-05 20:29:21.605326] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:28.313 true 00:09:28.313 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:28.314 20:29:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:28.573 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:28.573 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:28.573 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c2145348-83f9-450f-82ca-391d6b1c2d2f 00:09:28.833 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:29.093 [2024-12-05 20:29:22.275280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.093 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:29.093 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=206900 00:09:29.093 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:29.093 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:29.093 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 206900 /var/tmp/bdevperf.sock 00:09:29.093 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 206900 ']' 00:09:29.093 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:29.093 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.093 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:29.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:29.093 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.093 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:29.093 [2024-12-05 20:29:22.491196] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:09:29.093 [2024-12-05 20:29:22.491242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206900 ] 00:09:29.353 [2024-12-05 20:29:22.564458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.353 [2024-12-05 20:29:22.603701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.353 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.353 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:29.353 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:29.613 Nvme0n1 00:09:29.613 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:29.872 [ 00:09:29.872 { 00:09:29.872 "name": "Nvme0n1", 00:09:29.872 "aliases": [ 00:09:29.872 "c2145348-83f9-450f-82ca-391d6b1c2d2f" 00:09:29.872 ], 00:09:29.872 "product_name": "NVMe disk", 00:09:29.872 "block_size": 4096, 00:09:29.872 "num_blocks": 38912, 00:09:29.872 "uuid": "c2145348-83f9-450f-82ca-391d6b1c2d2f", 00:09:29.872 "numa_id": 1, 00:09:29.872 "assigned_rate_limits": { 00:09:29.872 "rw_ios_per_sec": 0, 00:09:29.872 "rw_mbytes_per_sec": 0, 00:09:29.872 "r_mbytes_per_sec": 0, 00:09:29.872 "w_mbytes_per_sec": 0 00:09:29.872 }, 00:09:29.872 "claimed": false, 00:09:29.872 "zoned": false, 00:09:29.872 "supported_io_types": { 00:09:29.872 "read": true, 
00:09:29.872 "write": true, 00:09:29.872 "unmap": true, 00:09:29.872 "flush": true, 00:09:29.872 "reset": true, 00:09:29.872 "nvme_admin": true, 00:09:29.872 "nvme_io": true, 00:09:29.872 "nvme_io_md": false, 00:09:29.872 "write_zeroes": true, 00:09:29.872 "zcopy": false, 00:09:29.872 "get_zone_info": false, 00:09:29.872 "zone_management": false, 00:09:29.872 "zone_append": false, 00:09:29.872 "compare": true, 00:09:29.872 "compare_and_write": true, 00:09:29.872 "abort": true, 00:09:29.872 "seek_hole": false, 00:09:29.872 "seek_data": false, 00:09:29.872 "copy": true, 00:09:29.872 "nvme_iov_md": false 00:09:29.872 }, 00:09:29.872 "memory_domains": [ 00:09:29.872 { 00:09:29.872 "dma_device_id": "system", 00:09:29.872 "dma_device_type": 1 00:09:29.872 } 00:09:29.872 ], 00:09:29.872 "driver_specific": { 00:09:29.872 "nvme": [ 00:09:29.872 { 00:09:29.872 "trid": { 00:09:29.872 "trtype": "TCP", 00:09:29.872 "adrfam": "IPv4", 00:09:29.872 "traddr": "10.0.0.2", 00:09:29.872 "trsvcid": "4420", 00:09:29.872 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:29.872 }, 00:09:29.872 "ctrlr_data": { 00:09:29.872 "cntlid": 1, 00:09:29.872 "vendor_id": "0x8086", 00:09:29.872 "model_number": "SPDK bdev Controller", 00:09:29.872 "serial_number": "SPDK0", 00:09:29.872 "firmware_revision": "25.01", 00:09:29.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:29.872 "oacs": { 00:09:29.872 "security": 0, 00:09:29.872 "format": 0, 00:09:29.872 "firmware": 0, 00:09:29.872 "ns_manage": 0 00:09:29.872 }, 00:09:29.872 "multi_ctrlr": true, 00:09:29.872 "ana_reporting": false 00:09:29.872 }, 00:09:29.872 "vs": { 00:09:29.872 "nvme_version": "1.3" 00:09:29.872 }, 00:09:29.872 "ns_data": { 00:09:29.872 "id": 1, 00:09:29.872 "can_share": true 00:09:29.872 } 00:09:29.872 } 00:09:29.872 ], 00:09:29.872 "mp_policy": "active_passive" 00:09:29.872 } 00:09:29.872 } 00:09:29.872 ] 00:09:29.872 20:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=206935 
00:09:29.872 20:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:29.872 20:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:29.872 Running I/O for 10 seconds... 00:09:31.248 Latency(us) 00:09:31.248 [2024-12-05T19:29:24.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.248 Nvme0n1 : 1.00 24358.00 95.15 0.00 0.00 0.00 0.00 0.00 00:09:31.248 [2024-12-05T19:29:24.689Z] =================================================================================================================== 00:09:31.248 [2024-12-05T19:29:24.689Z] Total : 24358.00 95.15 0.00 0.00 0.00 0.00 0.00 00:09:31.248 00:09:31.814 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:32.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.073 Nvme0n1 : 2.00 24487.00 95.65 0.00 0.00 0.00 0.00 0.00 00:09:32.073 [2024-12-05T19:29:25.514Z] =================================================================================================================== 00:09:32.073 [2024-12-05T19:29:25.514Z] Total : 24487.00 95.65 0.00 0.00 0.00 0.00 0.00 00:09:32.073 00:09:32.073 true 00:09:32.073 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:32.073 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:32.331 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:32.331 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:32.331 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 206935 00:09:32.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.897 Nvme0n1 : 3.00 24532.67 95.83 0.00 0.00 0.00 0.00 0.00 00:09:32.897 [2024-12-05T19:29:26.338Z] =================================================================================================================== 00:09:32.897 [2024-12-05T19:29:26.338Z] Total : 24532.67 95.83 0.00 0.00 0.00 0.00 0.00 00:09:32.897 00:09:33.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.838 Nvme0n1 : 4.00 24599.50 96.09 0.00 0.00 0.00 0.00 0.00 00:09:33.838 [2024-12-05T19:29:27.279Z] =================================================================================================================== 00:09:33.838 [2024-12-05T19:29:27.279Z] Total : 24599.50 96.09 0.00 0.00 0.00 0.00 0.00 00:09:33.838 00:09:35.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.218 Nvme0n1 : 5.00 24644.40 96.27 0.00 0.00 0.00 0.00 0.00 00:09:35.218 [2024-12-05T19:29:28.659Z] =================================================================================================================== 00:09:35.218 [2024-12-05T19:29:28.659Z] Total : 24644.40 96.27 0.00 0.00 0.00 0.00 0.00 00:09:35.218 00:09:36.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.156 Nvme0n1 : 6.00 24678.33 96.40 0.00 0.00 0.00 0.00 0.00 00:09:36.156 [2024-12-05T19:29:29.597Z] =================================================================================================================== 00:09:36.156 
[2024-12-05T19:29:29.597Z] Total : 24678.33 96.40 0.00 0.00 0.00 0.00 0.00 00:09:36.156 00:09:37.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.095 Nvme0n1 : 7.00 24677.43 96.40 0.00 0.00 0.00 0.00 0.00 00:09:37.095 [2024-12-05T19:29:30.536Z] =================================================================================================================== 00:09:37.095 [2024-12-05T19:29:30.536Z] Total : 24677.43 96.40 0.00 0.00 0.00 0.00 0.00 00:09:37.095 00:09:38.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.033 Nvme0n1 : 8.00 24698.75 96.48 0.00 0.00 0.00 0.00 0.00 00:09:38.033 [2024-12-05T19:29:31.474Z] =================================================================================================================== 00:09:38.033 [2024-12-05T19:29:31.474Z] Total : 24698.75 96.48 0.00 0.00 0.00 0.00 0.00 00:09:38.033 00:09:38.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.973 Nvme0n1 : 9.00 24730.44 96.60 0.00 0.00 0.00 0.00 0.00 00:09:38.973 [2024-12-05T19:29:32.414Z] =================================================================================================================== 00:09:38.973 [2024-12-05T19:29:32.414Z] Total : 24730.44 96.60 0.00 0.00 0.00 0.00 0.00 00:09:38.973 00:09:39.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.913 Nvme0n1 : 10.00 24754.20 96.70 0.00 0.00 0.00 0.00 0.00 00:09:39.913 [2024-12-05T19:29:33.354Z] =================================================================================================================== 00:09:39.913 [2024-12-05T19:29:33.354Z] Total : 24754.20 96.70 0.00 0.00 0.00 0.00 0.00 00:09:39.913 00:09:39.913 00:09:39.913 Latency(us) 00:09:39.913 [2024-12-05T19:29:33.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:39.913 Nvme0n1 : 10.01 24754.56 96.70 0.00 0.00 5167.20 3932.16 9234.62 00:09:39.913 [2024-12-05T19:29:33.354Z] =================================================================================================================== 00:09:39.913 [2024-12-05T19:29:33.354Z] Total : 24754.56 96.70 0.00 0.00 5167.20 3932.16 9234.62 00:09:39.913 { 00:09:39.913 "results": [ 00:09:39.913 { 00:09:39.913 "job": "Nvme0n1", 00:09:39.913 "core_mask": "0x2", 00:09:39.913 "workload": "randwrite", 00:09:39.913 "status": "finished", 00:09:39.913 "queue_depth": 128, 00:09:39.913 "io_size": 4096, 00:09:39.913 "runtime": 10.005025, 00:09:39.913 "iops": 24754.560833181327, 00:09:39.913 "mibps": 96.69750325461456, 00:09:39.913 "io_failed": 0, 00:09:39.913 "io_timeout": 0, 00:09:39.913 "avg_latency_us": 5167.204499242026, 00:09:39.913 "min_latency_us": 3932.16, 00:09:39.913 "max_latency_us": 9234.618181818181 00:09:39.913 } 00:09:39.913 ], 00:09:39.913 "core_count": 1 00:09:39.913 } 00:09:39.913 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 206900 00:09:39.913 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 206900 ']' 00:09:39.913 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 206900 00:09:39.913 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:39.913 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.913 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206900 00:09:40.173 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:40.173 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:40.173 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206900' 00:09:40.173 killing process with pid 206900 00:09:40.173 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 206900 00:09:40.173 Received shutdown signal, test time was about 10.000000 seconds 00:09:40.173 00:09:40.173 Latency(us) 00:09:40.173 [2024-12-05T19:29:33.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.173 [2024-12-05T19:29:33.614Z] =================================================================================================================== 00:09:40.173 [2024-12-05T19:29:33.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:40.173 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 206900 00:09:40.173 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.432 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:40.691 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:40.691 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:40.691 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:40.691 20:29:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:40.691 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:40.951 [2024-12-05 20:29:34.275539] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:40.951 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:40.951 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:40.951 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:40.951 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:40.951 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.951 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:40.951 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.951 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:40.951 20:29:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.951 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:40.951 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:40.951 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:41.211 request: 00:09:41.211 { 00:09:41.211 "uuid": "9dd4ab6c-057e-4801-a75f-4cca4c0fa418", 00:09:41.211 "method": "bdev_lvol_get_lvstores", 00:09:41.211 "req_id": 1 00:09:41.211 } 00:09:41.211 Got JSON-RPC error response 00:09:41.211 response: 00:09:41.211 { 00:09:41.211 "code": -19, 00:09:41.211 "message": "No such device" 00:09:41.211 } 00:09:41.211 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:41.211 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:41.211 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:41.211 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:41.211 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.470 aio_bdev 00:09:41.471 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev c2145348-83f9-450f-82ca-391d6b1c2d2f 00:09:41.471 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c2145348-83f9-450f-82ca-391d6b1c2d2f 00:09:41.471 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.471 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:41.471 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.471 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.471 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.471 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c2145348-83f9-450f-82ca-391d6b1c2d2f -t 2000 00:09:41.731 [ 00:09:41.731 { 00:09:41.731 "name": "c2145348-83f9-450f-82ca-391d6b1c2d2f", 00:09:41.731 "aliases": [ 00:09:41.731 "lvs/lvol" 00:09:41.731 ], 00:09:41.731 "product_name": "Logical Volume", 00:09:41.731 "block_size": 4096, 00:09:41.731 "num_blocks": 38912, 00:09:41.731 "uuid": "c2145348-83f9-450f-82ca-391d6b1c2d2f", 00:09:41.731 "assigned_rate_limits": { 00:09:41.731 "rw_ios_per_sec": 0, 00:09:41.731 "rw_mbytes_per_sec": 0, 00:09:41.731 "r_mbytes_per_sec": 0, 00:09:41.731 "w_mbytes_per_sec": 0 00:09:41.731 }, 00:09:41.731 "claimed": false, 00:09:41.731 "zoned": false, 00:09:41.731 "supported_io_types": { 00:09:41.731 "read": true, 00:09:41.731 "write": true, 00:09:41.731 "unmap": true, 00:09:41.731 "flush": false, 00:09:41.731 "reset": true, 00:09:41.731 
"nvme_admin": false, 00:09:41.731 "nvme_io": false, 00:09:41.731 "nvme_io_md": false, 00:09:41.731 "write_zeroes": true, 00:09:41.731 "zcopy": false, 00:09:41.731 "get_zone_info": false, 00:09:41.731 "zone_management": false, 00:09:41.731 "zone_append": false, 00:09:41.731 "compare": false, 00:09:41.731 "compare_and_write": false, 00:09:41.731 "abort": false, 00:09:41.731 "seek_hole": true, 00:09:41.731 "seek_data": true, 00:09:41.731 "copy": false, 00:09:41.731 "nvme_iov_md": false 00:09:41.731 }, 00:09:41.731 "driver_specific": { 00:09:41.731 "lvol": { 00:09:41.731 "lvol_store_uuid": "9dd4ab6c-057e-4801-a75f-4cca4c0fa418", 00:09:41.731 "base_bdev": "aio_bdev", 00:09:41.731 "thin_provision": false, 00:09:41.731 "num_allocated_clusters": 38, 00:09:41.731 "snapshot": false, 00:09:41.731 "clone": false, 00:09:41.731 "esnap_clone": false 00:09:41.731 } 00:09:41.731 } 00:09:41.731 } 00:09:41.731 ] 00:09:41.731 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:41.731 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:41.731 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:41.990 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:41.990 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:41.990 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:41.990 20:29:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:41.990 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c2145348-83f9-450f-82ca-391d6b1c2d2f 00:09:42.249 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9dd4ab6c-057e-4801-a75f-4cca4c0fa418 00:09:42.509 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:42.509 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:42.509 00:09:42.509 real 0m15.236s 00:09:42.509 user 0m14.700s 00:09:42.509 sys 0m1.508s 00:09:42.509 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.509 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:42.509 ************************************ 00:09:42.509 END TEST lvs_grow_clean 00:09:42.509 ************************************ 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:42.770 ************************************ 
00:09:42.770 START TEST lvs_grow_dirty 00:09:42.770 ************************************ 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:42.770 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:42.770 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.030 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:43.030 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:43.030 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:43.030 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:43.030 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:43.290 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:43.290 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:43.290 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c99655a2-326b-44a0-8f33-83de2ab9801d lvol 150 00:09:43.551 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f3957af2-36c6-4f84-984f-1b7c588a6fa8 00:09:43.551 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:43.551 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:43.551 [2024-12-05 20:29:36.908845] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:43.551 [2024-12-05 20:29:36.908892] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:43.551 true 00:09:43.551 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:43.551 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:43.811 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:43.811 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:44.070 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f3957af2-36c6-4f84-984f-1b7c588a6fa8 00:09:44.070 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:44.330 [2024-12-05 20:29:37.606931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.330 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:44.590 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=209633 00:09:44.590 20:29:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:44.590 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:44.590 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 209633 /var/tmp/bdevperf.sock 00:09:44.590 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 209633 ']' 00:09:44.590 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:44.590 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.590 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:44.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:44.590 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.590 20:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:44.590 [2024-12-05 20:29:37.839331] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:09:44.590 [2024-12-05 20:29:37.839375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid209633 ] 00:09:44.590 [2024-12-05 20:29:37.913911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.590 [2024-12-05 20:29:37.953093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.528 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.528 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:45.528 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:45.788 Nvme0n1 00:09:45.788 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:45.788 [ 00:09:45.788 { 00:09:45.788 "name": "Nvme0n1", 00:09:45.788 "aliases": [ 00:09:45.788 "f3957af2-36c6-4f84-984f-1b7c588a6fa8" 00:09:45.788 ], 00:09:45.788 "product_name": "NVMe disk", 00:09:45.788 "block_size": 4096, 00:09:45.788 "num_blocks": 38912, 00:09:45.788 "uuid": "f3957af2-36c6-4f84-984f-1b7c588a6fa8", 00:09:45.788 "numa_id": 1, 00:09:45.788 "assigned_rate_limits": { 00:09:45.788 "rw_ios_per_sec": 0, 00:09:45.788 "rw_mbytes_per_sec": 0, 00:09:45.788 "r_mbytes_per_sec": 0, 00:09:45.788 "w_mbytes_per_sec": 0 00:09:45.788 }, 00:09:45.788 "claimed": false, 00:09:45.788 "zoned": false, 00:09:45.788 "supported_io_types": { 00:09:45.788 "read": true, 
00:09:45.788 "write": true, 00:09:45.788 "unmap": true, 00:09:45.788 "flush": true, 00:09:45.788 "reset": true, 00:09:45.788 "nvme_admin": true, 00:09:45.788 "nvme_io": true, 00:09:45.788 "nvme_io_md": false, 00:09:45.788 "write_zeroes": true, 00:09:45.788 "zcopy": false, 00:09:45.788 "get_zone_info": false, 00:09:45.788 "zone_management": false, 00:09:45.788 "zone_append": false, 00:09:45.788 "compare": true, 00:09:45.788 "compare_and_write": true, 00:09:45.788 "abort": true, 00:09:45.788 "seek_hole": false, 00:09:45.788 "seek_data": false, 00:09:45.788 "copy": true, 00:09:45.788 "nvme_iov_md": false 00:09:45.788 }, 00:09:45.788 "memory_domains": [ 00:09:45.788 { 00:09:45.789 "dma_device_id": "system", 00:09:45.789 "dma_device_type": 1 00:09:45.789 } 00:09:45.789 ], 00:09:45.789 "driver_specific": { 00:09:45.789 "nvme": [ 00:09:45.789 { 00:09:45.789 "trid": { 00:09:45.789 "trtype": "TCP", 00:09:45.789 "adrfam": "IPv4", 00:09:45.789 "traddr": "10.0.0.2", 00:09:45.789 "trsvcid": "4420", 00:09:45.789 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:45.789 }, 00:09:45.789 "ctrlr_data": { 00:09:45.789 "cntlid": 1, 00:09:45.789 "vendor_id": "0x8086", 00:09:45.789 "model_number": "SPDK bdev Controller", 00:09:45.789 "serial_number": "SPDK0", 00:09:45.789 "firmware_revision": "25.01", 00:09:45.789 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:45.789 "oacs": { 00:09:45.789 "security": 0, 00:09:45.789 "format": 0, 00:09:45.789 "firmware": 0, 00:09:45.789 "ns_manage": 0 00:09:45.789 }, 00:09:45.789 "multi_ctrlr": true, 00:09:45.789 "ana_reporting": false 00:09:45.789 }, 00:09:45.789 "vs": { 00:09:45.789 "nvme_version": "1.3" 00:09:45.789 }, 00:09:45.789 "ns_data": { 00:09:45.789 "id": 1, 00:09:45.789 "can_share": true 00:09:45.789 } 00:09:45.789 } 00:09:45.789 ], 00:09:45.789 "mp_policy": "active_passive" 00:09:45.789 } 00:09:45.789 } 00:09:45.789 ] 00:09:45.789 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:45.789 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=209900 00:09:45.789 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:46.049 Running I/O for 10 seconds... 00:09:46.987 Latency(us) 00:09:46.987 [2024-12-05T19:29:40.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.987 Nvme0n1 : 1.00 25163.00 98.29 0.00 0.00 0.00 0.00 0.00 00:09:46.987 [2024-12-05T19:29:40.428Z] =================================================================================================================== 00:09:46.987 [2024-12-05T19:29:40.428Z] Total : 25163.00 98.29 0.00 0.00 0.00 0.00 0.00 00:09:46.987 00:09:47.924 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:47.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.924 Nvme0n1 : 2.00 25474.00 99.51 0.00 0.00 0.00 0.00 0.00 00:09:47.924 [2024-12-05T19:29:41.365Z] =================================================================================================================== 00:09:47.924 [2024-12-05T19:29:41.365Z] Total : 25474.00 99.51 0.00 0.00 0.00 0.00 0.00 00:09:47.924 00:09:48.183 true 00:09:48.183 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:48.183 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:48.183 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:48.183 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:48.183 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 209900 00:09:49.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.178 Nvme0n1 : 3.00 25601.33 100.01 0.00 0.00 0.00 0.00 0.00 00:09:49.178 [2024-12-05T19:29:42.619Z] =================================================================================================================== 00:09:49.178 [2024-12-05T19:29:42.619Z] Total : 25601.33 100.01 0.00 0.00 0.00 0.00 0.00 00:09:49.178 00:09:50.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.114 Nvme0n1 : 4.00 25690.00 100.35 0.00 0.00 0.00 0.00 0.00 00:09:50.114 [2024-12-05T19:29:43.555Z] =================================================================================================================== 00:09:50.114 [2024-12-05T19:29:43.555Z] Total : 25690.00 100.35 0.00 0.00 0.00 0.00 0.00 00:09:50.114 00:09:51.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.051 Nvme0n1 : 5.00 25751.80 100.59 0.00 0.00 0.00 0.00 0.00 00:09:51.051 [2024-12-05T19:29:44.492Z] =================================================================================================================== 00:09:51.051 [2024-12-05T19:29:44.492Z] Total : 25751.80 100.59 0.00 0.00 0.00 0.00 0.00 00:09:51.051 00:09:51.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.988 Nvme0n1 : 6.00 25795.17 100.76 0.00 0.00 0.00 0.00 0.00 00:09:51.988 [2024-12-05T19:29:45.429Z] =================================================================================================================== 00:09:51.988 
[2024-12-05T19:29:45.429Z] Total : 25795.17 100.76 0.00 0.00 0.00 0.00 0.00 00:09:51.988 00:09:52.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.924 Nvme0n1 : 7.00 25840.00 100.94 0.00 0.00 0.00 0.00 0.00 00:09:52.924 [2024-12-05T19:29:46.365Z] =================================================================================================================== 00:09:52.924 [2024-12-05T19:29:46.365Z] Total : 25840.00 100.94 0.00 0.00 0.00 0.00 0.00 00:09:52.924 00:09:54.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.304 Nvme0n1 : 8.00 25845.12 100.96 0.00 0.00 0.00 0.00 0.00 00:09:54.304 [2024-12-05T19:29:47.745Z] =================================================================================================================== 00:09:54.304 [2024-12-05T19:29:47.745Z] Total : 25845.12 100.96 0.00 0.00 0.00 0.00 0.00 00:09:54.304 00:09:55.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.243 Nvme0n1 : 9.00 25875.22 101.08 0.00 0.00 0.00 0.00 0.00 00:09:55.243 [2024-12-05T19:29:48.684Z] =================================================================================================================== 00:09:55.243 [2024-12-05T19:29:48.684Z] Total : 25875.22 101.08 0.00 0.00 0.00 0.00 0.00 00:09:55.243 00:09:56.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.182 Nvme0n1 : 10.00 25893.60 101.15 0.00 0.00 0.00 0.00 0.00 00:09:56.182 [2024-12-05T19:29:49.623Z] =================================================================================================================== 00:09:56.182 [2024-12-05T19:29:49.623Z] Total : 25893.60 101.15 0.00 0.00 0.00 0.00 0.00 00:09:56.182 00:09:56.182 00:09:56.182 Latency(us) 00:09:56.182 [2024-12-05T19:29:49.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:56.182 Nvme0n1 : 10.00 25899.88 101.17 0.00 0.00 4938.90 2740.60 13702.98 00:09:56.182 [2024-12-05T19:29:49.623Z] =================================================================================================================== 00:09:56.182 [2024-12-05T19:29:49.623Z] Total : 25899.88 101.17 0.00 0.00 4938.90 2740.60 13702.98 00:09:56.182 { 00:09:56.182 "results": [ 00:09:56.182 { 00:09:56.182 "job": "Nvme0n1", 00:09:56.182 "core_mask": "0x2", 00:09:56.182 "workload": "randwrite", 00:09:56.182 "status": "finished", 00:09:56.182 "queue_depth": 128, 00:09:56.182 "io_size": 4096, 00:09:56.182 "runtime": 10.004951, 00:09:56.182 "iops": 25899.8769709117, 00:09:56.182 "mibps": 101.17139441762383, 00:09:56.182 "io_failed": 0, 00:09:56.182 "io_timeout": 0, 00:09:56.182 "avg_latency_us": 4938.895235491757, 00:09:56.182 "min_latency_us": 2740.5963636363635, 00:09:56.182 "max_latency_us": 13702.981818181817 00:09:56.182 } 00:09:56.182 ], 00:09:56.182 "core_count": 1 00:09:56.182 } 00:09:56.182 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 209633 00:09:56.182 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 209633 ']' 00:09:56.182 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 209633 00:09:56.182 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:56.182 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.182 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 209633 00:09:56.182 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:56.182 20:29:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:56.182 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 209633' 00:09:56.182 killing process with pid 209633 00:09:56.182 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 209633 00:09:56.182 Received shutdown signal, test time was about 10.000000 seconds 00:09:56.182 00:09:56.182 Latency(us) 00:09:56.182 [2024-12-05T19:29:49.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.182 [2024-12-05T19:29:49.623Z] =================================================================================================================== 00:09:56.182 [2024-12-05T19:29:49.623Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:56.182 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 209633 00:09:56.182 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:56.442 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:56.701 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:56.701 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:56.961 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:56.961 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:56.961 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 206359 00:09:56.961 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 206359 00:09:56.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 206359 Killed "${NVMF_APP[@]}" "$@" 00:09:56.961 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=212004 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 212004 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 212004 ']' 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.962 20:29:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.962 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:56.962 [2024-12-05 20:29:50.238535] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:09:56.962 [2024-12-05 20:29:50.238581] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.962 [2024-12-05 20:29:50.314681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.962 [2024-12-05 20:29:50.352909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.962 [2024-12-05 20:29:50.352943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.962 [2024-12-05 20:29:50.352952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.962 [2024-12-05 20:29:50.352957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.962 [2024-12-05 20:29:50.352962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:56.962 [2024-12-05 20:29:50.353500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.220 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.220 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:57.220 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.220 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.220 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:57.220 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.220 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:57.220 [2024-12-05 20:29:50.635449] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:57.220 [2024-12-05 20:29:50.635531] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:57.220 [2024-12-05 20:29:50.635555] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:57.479 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:57.479 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f3957af2-36c6-4f84-984f-1b7c588a6fa8 00:09:57.479 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f3957af2-36c6-4f84-984f-1b7c588a6fa8 
00:09:57.479 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.479 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:57.479 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.479 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.479 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:57.479 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f3957af2-36c6-4f84-984f-1b7c588a6fa8 -t 2000 00:09:57.737 [ 00:09:57.737 { 00:09:57.737 "name": "f3957af2-36c6-4f84-984f-1b7c588a6fa8", 00:09:57.737 "aliases": [ 00:09:57.737 "lvs/lvol" 00:09:57.737 ], 00:09:57.737 "product_name": "Logical Volume", 00:09:57.737 "block_size": 4096, 00:09:57.737 "num_blocks": 38912, 00:09:57.737 "uuid": "f3957af2-36c6-4f84-984f-1b7c588a6fa8", 00:09:57.737 "assigned_rate_limits": { 00:09:57.737 "rw_ios_per_sec": 0, 00:09:57.737 "rw_mbytes_per_sec": 0, 00:09:57.737 "r_mbytes_per_sec": 0, 00:09:57.737 "w_mbytes_per_sec": 0 00:09:57.737 }, 00:09:57.737 "claimed": false, 00:09:57.737 "zoned": false, 00:09:57.737 "supported_io_types": { 00:09:57.737 "read": true, 00:09:57.737 "write": true, 00:09:57.737 "unmap": true, 00:09:57.737 "flush": false, 00:09:57.737 "reset": true, 00:09:57.737 "nvme_admin": false, 00:09:57.737 "nvme_io": false, 00:09:57.737 "nvme_io_md": false, 00:09:57.737 "write_zeroes": true, 00:09:57.737 "zcopy": false, 00:09:57.737 "get_zone_info": false, 00:09:57.737 "zone_management": false, 00:09:57.737 "zone_append": 
false, 00:09:57.737 "compare": false, 00:09:57.737 "compare_and_write": false, 00:09:57.737 "abort": false, 00:09:57.737 "seek_hole": true, 00:09:57.737 "seek_data": true, 00:09:57.737 "copy": false, 00:09:57.737 "nvme_iov_md": false 00:09:57.737 }, 00:09:57.737 "driver_specific": { 00:09:57.737 "lvol": { 00:09:57.737 "lvol_store_uuid": "c99655a2-326b-44a0-8f33-83de2ab9801d", 00:09:57.737 "base_bdev": "aio_bdev", 00:09:57.737 "thin_provision": false, 00:09:57.737 "num_allocated_clusters": 38, 00:09:57.737 "snapshot": false, 00:09:57.737 "clone": false, 00:09:57.737 "esnap_clone": false 00:09:57.737 } 00:09:57.737 } 00:09:57.737 } 00:09:57.737 ] 00:09:57.737 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:57.737 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:57.737 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:57.995 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:57.995 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:57.995 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:57.995 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:57.995 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:58.254 [2024-12-05 20:29:51.540188] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:58.254 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:58.254 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:58.254 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:58.254 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.254 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:58.254 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.254 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:58.254 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.254 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:58.254 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.254 20:29:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:58.254 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:58.512 request: 00:09:58.512 { 00:09:58.512 "uuid": "c99655a2-326b-44a0-8f33-83de2ab9801d", 00:09:58.512 "method": "bdev_lvol_get_lvstores", 00:09:58.512 "req_id": 1 00:09:58.512 } 00:09:58.512 Got JSON-RPC error response 00:09:58.512 response: 00:09:58.512 { 00:09:58.512 "code": -19, 00:09:58.512 "message": "No such device" 00:09:58.512 } 00:09:58.512 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:58.512 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:58.512 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:58.512 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:58.513 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:58.513 aio_bdev 00:09:58.513 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f3957af2-36c6-4f84-984f-1b7c588a6fa8 00:09:58.513 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f3957af2-36c6-4f84-984f-1b7c588a6fa8 00:09:58.513 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.513 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:58.513 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.513 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.513 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:58.771 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f3957af2-36c6-4f84-984f-1b7c588a6fa8 -t 2000 00:09:59.030 [ 00:09:59.030 { 00:09:59.030 "name": "f3957af2-36c6-4f84-984f-1b7c588a6fa8", 00:09:59.030 "aliases": [ 00:09:59.030 "lvs/lvol" 00:09:59.030 ], 00:09:59.030 "product_name": "Logical Volume", 00:09:59.030 "block_size": 4096, 00:09:59.030 "num_blocks": 38912, 00:09:59.030 "uuid": "f3957af2-36c6-4f84-984f-1b7c588a6fa8", 00:09:59.030 "assigned_rate_limits": { 00:09:59.030 "rw_ios_per_sec": 0, 00:09:59.030 "rw_mbytes_per_sec": 0, 00:09:59.030 "r_mbytes_per_sec": 0, 00:09:59.030 "w_mbytes_per_sec": 0 00:09:59.030 }, 00:09:59.030 "claimed": false, 00:09:59.030 "zoned": false, 00:09:59.030 "supported_io_types": { 00:09:59.030 "read": true, 00:09:59.030 "write": true, 00:09:59.030 "unmap": true, 00:09:59.030 "flush": false, 00:09:59.030 "reset": true, 00:09:59.030 "nvme_admin": false, 00:09:59.030 "nvme_io": false, 00:09:59.030 "nvme_io_md": false, 00:09:59.030 "write_zeroes": true, 00:09:59.030 "zcopy": false, 00:09:59.030 "get_zone_info": false, 00:09:59.030 "zone_management": false, 00:09:59.030 "zone_append": false, 00:09:59.030 "compare": false, 00:09:59.030 "compare_and_write": false, 
00:09:59.030 "abort": false, 00:09:59.030 "seek_hole": true, 00:09:59.030 "seek_data": true, 00:09:59.030 "copy": false, 00:09:59.030 "nvme_iov_md": false 00:09:59.030 }, 00:09:59.030 "driver_specific": { 00:09:59.030 "lvol": { 00:09:59.030 "lvol_store_uuid": "c99655a2-326b-44a0-8f33-83de2ab9801d", 00:09:59.030 "base_bdev": "aio_bdev", 00:09:59.030 "thin_provision": false, 00:09:59.030 "num_allocated_clusters": 38, 00:09:59.030 "snapshot": false, 00:09:59.030 "clone": false, 00:09:59.030 "esnap_clone": false 00:09:59.030 } 00:09:59.030 } 00:09:59.030 } 00:09:59.030 ] 00:09:59.030 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:59.030 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:59.030 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:59.030 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:59.030 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:59.030 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:59.290 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:59.290 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f3957af2-36c6-4f84-984f-1b7c588a6fa8 00:09:59.549 20:29:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c99655a2-326b-44a0-8f33-83de2ab9801d 00:09:59.549 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:59.808 00:09:59.808 real 0m17.185s 00:09:59.808 user 0m43.956s 00:09:59.808 sys 0m4.015s 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:59.808 ************************************ 00:09:59.808 END TEST lvs_grow_dirty 00:09:59.808 ************************************ 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:59.808 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:59.808 nvmf_trace.0 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:00.068 rmmod nvme_tcp 00:10:00.068 rmmod nvme_fabrics 00:10:00.068 rmmod nvme_keyring 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 212004 ']' 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 212004 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 212004 ']' 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 212004 
00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 212004 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 212004' 00:10:00.068 killing process with pid 212004 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 212004 00:10:00.068 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 212004 00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.328 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.238 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:02.238 00:10:02.238 real 0m41.710s 00:10:02.238 user 1m3.981s 00:10:02.238 sys 0m10.550s 00:10:02.238 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.238 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:02.239 ************************************ 00:10:02.239 END TEST nvmf_lvs_grow 00:10:02.239 ************************************ 00:10:02.239 20:29:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:02.239 20:29:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:02.239 20:29:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.239 20:29:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:02.239 ************************************ 00:10:02.239 START TEST nvmf_bdev_io_wait 00:10:02.239 ************************************ 00:10:02.239 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:02.499 * Looking for test storage... 
00:10:02.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:02.499 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.499 --rc genhtml_branch_coverage=1 00:10:02.499 --rc genhtml_function_coverage=1 00:10:02.499 --rc genhtml_legend=1 00:10:02.499 --rc geninfo_all_blocks=1 00:10:02.499 --rc geninfo_unexecuted_blocks=1 00:10:02.499 00:10:02.499 ' 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:02.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.499 --rc genhtml_branch_coverage=1 00:10:02.499 --rc genhtml_function_coverage=1 00:10:02.499 --rc genhtml_legend=1 00:10:02.499 --rc geninfo_all_blocks=1 00:10:02.499 --rc geninfo_unexecuted_blocks=1 00:10:02.499 00:10:02.499 ' 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:02.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.499 --rc genhtml_branch_coverage=1 00:10:02.499 --rc genhtml_function_coverage=1 00:10:02.499 --rc genhtml_legend=1 00:10:02.499 --rc geninfo_all_blocks=1 00:10:02.499 --rc geninfo_unexecuted_blocks=1 00:10:02.499 00:10:02.499 ' 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:02.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.499 --rc genhtml_branch_coverage=1 00:10:02.499 --rc genhtml_function_coverage=1 00:10:02.499 --rc genhtml_legend=1 00:10:02.499 --rc geninfo_all_blocks=1 00:10:02.499 --rc geninfo_unexecuted_blocks=1 00:10:02.499 00:10:02.499 ' 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.499 20:29:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.499 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:02.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:02.500 20:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:09.075 20:30:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:09.075 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:09.075 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.075 20:30:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:09.075 Found net devices under 0000:af:00.0: cvl_0_0 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.075 
20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:09.075 Found net devices under 0000:af:00.1: cvl_0_1 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.075 20:30:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:09.075 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:09.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:10:09.076 00:10:09.076 --- 10.0.0.2 ping statistics --- 00:10:09.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.076 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:09.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:10:09.076 00:10:09.076 --- 10.0.0.1 ping statistics --- 00:10:09.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.076 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=216441 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 216441 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 216441 ']' 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.076 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.076 [2024-12-05 20:30:01.968739] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:09.076 [2024-12-05 20:30:01.968817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.076 [2024-12-05 20:30:02.042966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.076 [2024-12-05 20:30:02.085365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.076 [2024-12-05 20:30:02.085400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:09.076 [2024-12-05 20:30:02.085407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.076 [2024-12-05 20:30:02.085412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.076 [2024-12-05 20:30:02.085418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.076 [2024-12-05 20:30:02.086820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.076 [2024-12-05 20:30:02.086934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.076 [2024-12-05 20:30:02.087027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.076 [2024-12-05 20:30:02.087028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.644 20:30:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.644 [2024-12-05 20:30:02.891472] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.644 Malloc0 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.644 
20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.644 [2024-12-05 20:30:02.945997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=216692 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=216695 
00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:09.644 { 00:10:09.644 "params": { 00:10:09.644 "name": "Nvme$subsystem", 00:10:09.644 "trtype": "$TEST_TRANSPORT", 00:10:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.644 "adrfam": "ipv4", 00:10:09.644 "trsvcid": "$NVMF_PORT", 00:10:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.644 "hdgst": ${hdgst:-false}, 00:10:09.644 "ddgst": ${ddgst:-false} 00:10:09.644 }, 00:10:09.644 "method": "bdev_nvme_attach_controller" 00:10:09.644 } 00:10:09.644 EOF 00:10:09.644 )") 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=216698 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:09.644 { 00:10:09.644 "params": { 00:10:09.644 "name": "Nvme$subsystem", 00:10:09.644 "trtype": "$TEST_TRANSPORT", 00:10:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.644 "adrfam": "ipv4", 00:10:09.644 "trsvcid": "$NVMF_PORT", 00:10:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.644 "hdgst": ${hdgst:-false}, 00:10:09.644 "ddgst": ${ddgst:-false} 00:10:09.644 }, 00:10:09.644 "method": "bdev_nvme_attach_controller" 00:10:09.644 } 00:10:09.644 EOF 00:10:09.644 )") 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=216702 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:09.644 20:30:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:09.644 { 00:10:09.644 "params": { 00:10:09.644 "name": "Nvme$subsystem", 00:10:09.644 "trtype": "$TEST_TRANSPORT", 00:10:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.644 "adrfam": "ipv4", 00:10:09.644 "trsvcid": "$NVMF_PORT", 00:10:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.644 "hdgst": ${hdgst:-false}, 00:10:09.644 "ddgst": ${ddgst:-false} 00:10:09.644 }, 00:10:09.644 "method": "bdev_nvme_attach_controller" 00:10:09.644 } 00:10:09.644 EOF 00:10:09.644 )") 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:09.644 { 00:10:09.644 "params": { 00:10:09.644 "name": "Nvme$subsystem", 00:10:09.644 "trtype": "$TEST_TRANSPORT", 00:10:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.644 "adrfam": "ipv4", 00:10:09.644 "trsvcid": "$NVMF_PORT", 00:10:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.644 "hdgst": ${hdgst:-false}, 00:10:09.644 "ddgst": ${ddgst:-false} 00:10:09.644 }, 00:10:09.644 "method": "bdev_nvme_attach_controller" 00:10:09.644 } 00:10:09.644 EOF 00:10:09.644 )") 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 216692 00:10:09.644 20:30:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:09.644 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:09.645 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:09.645 "params": { 00:10:09.645 "name": "Nvme1", 00:10:09.645 "trtype": "tcp", 00:10:09.645 "traddr": "10.0.0.2", 00:10:09.645 "adrfam": "ipv4", 00:10:09.645 "trsvcid": "4420", 00:10:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.645 "hdgst": false, 00:10:09.645 "ddgst": false 00:10:09.645 }, 00:10:09.645 "method": "bdev_nvme_attach_controller" 00:10:09.645 }' 00:10:09.645 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:09.645 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:09.645 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:09.645 "params": { 00:10:09.645 "name": "Nvme1", 00:10:09.645 "trtype": "tcp", 00:10:09.645 "traddr": "10.0.0.2", 00:10:09.645 "adrfam": "ipv4", 00:10:09.645 "trsvcid": "4420", 00:10:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.645 "hdgst": false, 00:10:09.645 "ddgst": false 00:10:09.645 }, 00:10:09.645 "method": "bdev_nvme_attach_controller" 00:10:09.645 }' 00:10:09.645 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:09.645 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:09.645 "params": { 00:10:09.645 "name": "Nvme1", 00:10:09.645 "trtype": "tcp", 00:10:09.645 "traddr": "10.0.0.2", 00:10:09.645 "adrfam": "ipv4", 00:10:09.645 "trsvcid": "4420", 00:10:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.645 "hdgst": false, 00:10:09.645 "ddgst": false 00:10:09.645 }, 00:10:09.645 "method": "bdev_nvme_attach_controller" 00:10:09.645 }' 00:10:09.645 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:09.645 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:09.645 "params": { 00:10:09.645 "name": "Nvme1", 00:10:09.645 "trtype": "tcp", 00:10:09.645 "traddr": "10.0.0.2", 00:10:09.645 "adrfam": "ipv4", 00:10:09.645 "trsvcid": "4420", 00:10:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.645 "hdgst": false, 00:10:09.645 "ddgst": false 00:10:09.645 }, 00:10:09.645 "method": "bdev_nvme_attach_controller" 00:10:09.645 }' 00:10:09.645 [2024-12-05 20:30:02.996814] Starting SPDK v25.01-pre git sha1 
98eca6fa0 / DPDK 24.03.0 initialization... 00:10:09.645 [2024-12-05 20:30:02.996861] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:09.645 [2024-12-05 20:30:02.996915] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:09.645 [2024-12-05 20:30:02.996962] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:09.645 [2024-12-05 20:30:02.997792] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:09.645 [2024-12-05 20:30:02.997826] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:09.645 [2024-12-05 20:30:03.000936] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:10:09.645 [2024-12-05 20:30:03.000981] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:09.904 [2024-12-05 20:30:03.178233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.904 [2024-12-05 20:30:03.218705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:09.904 [2024-12-05 20:30:03.264934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.904 [2024-12-05 20:30:03.305574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:10.163 [2024-12-05 20:30:03.352119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.163 [2024-12-05 20:30:03.411760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:10.163 [2024-12-05 20:30:03.412923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.163 [2024-12-05 20:30:03.452692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:10.163 Running I/O for 1 seconds... 00:10:10.422 Running I/O for 1 seconds... 00:10:10.422 Running I/O for 1 seconds... 00:10:10.422 Running I/O for 1 seconds... 
00:10:11.363 265144.00 IOPS, 1035.72 MiB/s 00:10:11.363 Latency(us) 00:10:11.363 [2024-12-05T19:30:04.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.363 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:11.363 Nvme1n1 : 1.00 264770.99 1034.26 0.00 0.00 480.87 203.87 1392.64 00:10:11.363 [2024-12-05T19:30:04.804Z] =================================================================================================================== 00:10:11.363 [2024-12-05T19:30:04.804Z] Total : 264770.99 1034.26 0.00 0.00 480.87 203.87 1392.64 00:10:11.363 10789.00 IOPS, 42.14 MiB/s [2024-12-05T19:30:04.804Z] 14444.00 IOPS, 56.42 MiB/s 00:10:11.363 Latency(us) 00:10:11.363 [2024-12-05T19:30:04.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.363 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:11.363 Nvme1n1 : 1.01 10846.13 42.37 0.00 0.00 11758.20 5719.51 20852.36 00:10:11.363 [2024-12-05T19:30:04.804Z] =================================================================================================================== 00:10:11.363 [2024-12-05T19:30:04.804Z] Total : 10846.13 42.37 0.00 0.00 11758.20 5719.51 20852.36 00:10:11.363 00:10:11.363 Latency(us) 00:10:11.363 [2024-12-05T19:30:04.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.363 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:11.363 Nvme1n1 : 1.01 14505.68 56.66 0.00 0.00 8800.38 4140.68 16562.73 00:10:11.363 [2024-12-05T19:30:04.804Z] =================================================================================================================== 00:10:11.363 [2024-12-05T19:30:04.804Z] Total : 14505.68 56.66 0.00 0.00 8800.38 4140.68 16562.73 00:10:11.363 10888.00 IOPS, 42.53 MiB/s 00:10:11.363 Latency(us) 00:10:11.363 [2024-12-05T19:30:04.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:10:11.363 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:11.363 Nvme1n1 : 1.01 10959.42 42.81 0.00 0.00 11645.90 4438.57 22163.08 00:10:11.363 [2024-12-05T19:30:04.804Z] =================================================================================================================== 00:10:11.363 [2024-12-05T19:30:04.804Z] Total : 10959.42 42.81 0.00 0.00 11645.90 4438.57 22163.08 00:10:11.363 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 216695 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 216698 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 216702 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.623 rmmod nvme_tcp 00:10:11.623 rmmod nvme_fabrics 00:10:11.623 rmmod nvme_keyring 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 216441 ']' 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 216441 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 216441 ']' 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 216441 00:10:11.623 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:11.624 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.624 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216441 00:10:11.624 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.624 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.624 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216441' 00:10:11.624 killing process with pid 216441 00:10:11.624 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 216441 00:10:11.624 20:30:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 216441 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.883 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.792 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.792 00:10:13.792 real 0m11.546s 00:10:13.792 user 0m19.148s 00:10:13.792 sys 0m6.310s 00:10:13.792 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.792 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:13.792 ************************************ 
00:10:13.792 END TEST nvmf_bdev_io_wait 00:10:13.792 ************************************ 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.053 ************************************ 00:10:14.053 START TEST nvmf_queue_depth 00:10:14.053 ************************************ 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:14.053 * Looking for test storage... 00:10:14.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:14.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.053 --rc genhtml_branch_coverage=1 00:10:14.053 --rc genhtml_function_coverage=1 00:10:14.053 --rc genhtml_legend=1 00:10:14.053 --rc geninfo_all_blocks=1 00:10:14.053 --rc 
geninfo_unexecuted_blocks=1 00:10:14.053 00:10:14.053 ' 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:14.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.053 --rc genhtml_branch_coverage=1 00:10:14.053 --rc genhtml_function_coverage=1 00:10:14.053 --rc genhtml_legend=1 00:10:14.053 --rc geninfo_all_blocks=1 00:10:14.053 --rc geninfo_unexecuted_blocks=1 00:10:14.053 00:10:14.053 ' 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:14.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.053 --rc genhtml_branch_coverage=1 00:10:14.053 --rc genhtml_function_coverage=1 00:10:14.053 --rc genhtml_legend=1 00:10:14.053 --rc geninfo_all_blocks=1 00:10:14.053 --rc geninfo_unexecuted_blocks=1 00:10:14.053 00:10:14.053 ' 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:14.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.053 --rc genhtml_branch_coverage=1 00:10:14.053 --rc genhtml_function_coverage=1 00:10:14.053 --rc genhtml_legend=1 00:10:14.053 --rc geninfo_all_blocks=1 00:10:14.053 --rc geninfo_unexecuted_blocks=1 00:10:14.053 00:10:14.053 ' 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.053 20:30:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.053 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.342 20:30:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.342 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.343 20:30:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.343 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.922 20:30:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:20.922 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:20.922 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:20.922 Found net devices under 0000:af:00.0: cvl_0_0 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:20.922 Found net devices under 0000:af:00.1: cvl_0_1 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.922 
20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.922 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:10:20.923 00:10:20.923 --- 10.0.0.2 ping statistics --- 00:10:20.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.923 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:10:20.923 00:10:20.923 --- 10.0.0.1 ping statistics --- 00:10:20.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.923 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=221154 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 221154 
00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 221154 ']' 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 [2024-12-05 20:30:13.567340] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:20.923 [2024-12-05 20:30:13.567384] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.923 [2024-12-05 20:30:13.629200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.923 [2024-12-05 20:30:13.669175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.923 [2024-12-05 20:30:13.669209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:20.923 [2024-12-05 20:30:13.669216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.923 [2024-12-05 20:30:13.669222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.923 [2024-12-05 20:30:13.669227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.923 [2024-12-05 20:30:13.669745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 [2024-12-05 20:30:13.812768] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 Malloc0 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 [2024-12-05 20:30:13.862849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.923 20:30:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=221177 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 221177 /var/tmp/bdevperf.sock 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 221177 ']' 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:20.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.923 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.923 [2024-12-05 20:30:13.914001] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:10:20.923 [2024-12-05 20:30:13.914039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221177 ] 00:10:20.923 [2024-12-05 20:30:13.985836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.923 [2024-12-05 20:30:14.023593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.492 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.492 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:21.492 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:21.492 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.492 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:21.492 NVMe0n1 00:10:21.492 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.492 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:21.492 Running I/O for 10 seconds... 
00:10:23.808 12882.00 IOPS, 50.32 MiB/s [2024-12-05T19:30:18.188Z] 13201.50 IOPS, 51.57 MiB/s [2024-12-05T19:30:19.125Z] 13301.33 IOPS, 51.96 MiB/s [2024-12-05T19:30:20.104Z] 13348.25 IOPS, 52.14 MiB/s [2024-12-05T19:30:21.039Z] 13492.00 IOPS, 52.70 MiB/s [2024-12-05T19:30:21.976Z] 13476.33 IOPS, 52.64 MiB/s [2024-12-05T19:30:23.357Z] 13496.14 IOPS, 52.72 MiB/s [2024-12-05T19:30:24.298Z] 13538.88 IOPS, 52.89 MiB/s [2024-12-05T19:30:25.260Z] 13577.44 IOPS, 53.04 MiB/s [2024-12-05T19:30:25.260Z] 13608.00 IOPS, 53.16 MiB/s 00:10:31.819 Latency(us) 00:10:31.819 [2024-12-05T19:30:25.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.819 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:31.819 Verification LBA range: start 0x0 length 0x4000 00:10:31.819 NVMe0n1 : 10.04 13651.02 53.32 0.00 0.00 74789.08 6464.23 48615.80 00:10:31.819 [2024-12-05T19:30:25.261Z] =================================================================================================================== 00:10:31.820 [2024-12-05T19:30:25.261Z] Total : 13651.02 53.32 0.00 0.00 74789.08 6464.23 48615.80 00:10:31.820 { 00:10:31.820 "results": [ 00:10:31.820 { 00:10:31.820 "job": "NVMe0n1", 00:10:31.820 "core_mask": "0x1", 00:10:31.820 "workload": "verify", 00:10:31.820 "status": "finished", 00:10:31.820 "verify_range": { 00:10:31.820 "start": 0, 00:10:31.820 "length": 16384 00:10:31.820 }, 00:10:31.820 "queue_depth": 1024, 00:10:31.820 "io_size": 4096, 00:10:31.820 "runtime": 10.042916, 00:10:31.820 "iops": 13651.015302726817, 00:10:31.820 "mibps": 53.32427852627663, 00:10:31.820 "io_failed": 0, 00:10:31.820 "io_timeout": 0, 00:10:31.820 "avg_latency_us": 74789.07633223169, 00:10:31.820 "min_latency_us": 6464.232727272727, 00:10:31.820 "max_latency_us": 48615.796363636364 00:10:31.820 } 00:10:31.820 ], 00:10:31.820 "core_count": 1 00:10:31.820 } 00:10:31.820 20:30:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 221177 00:10:31.820 20:30:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 221177 ']' 00:10:31.820 20:30:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 221177 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221177 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 221177' 00:10:31.820 killing process with pid 221177 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 221177 00:10:31.820 Received shutdown signal, test time was about 10.000000 seconds 00:10:31.820 00:10:31.820 Latency(us) 00:10:31.820 [2024-12-05T19:30:25.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.820 [2024-12-05T19:30:25.261Z] =================================================================================================================== 00:10:31.820 [2024-12-05T19:30:25.261Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 221177 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.820 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.820 rmmod nvme_tcp 00:10:31.820 rmmod nvme_fabrics 00:10:31.820 rmmod nvme_keyring 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 221154 ']' 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 221154 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 221154 ']' 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 221154 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221154 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 221154' 00:10:32.079 killing process with pid 221154 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 221154 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 221154 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.079 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.620 20:30:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.620 00:10:34.620 real 0m20.281s 00:10:34.620 user 0m24.291s 00:10:34.620 sys 0m6.046s 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:34.620 ************************************ 00:10:34.620 END TEST nvmf_queue_depth 00:10:34.620 ************************************ 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.620 ************************************ 00:10:34.620 START TEST nvmf_target_multipath 00:10:34.620 ************************************ 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:34.620 * Looking for test storage... 
00:10:34.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:34.620 20:30:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:34.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.620 --rc genhtml_branch_coverage=1 00:10:34.620 --rc genhtml_function_coverage=1 00:10:34.620 --rc genhtml_legend=1 00:10:34.620 --rc geninfo_all_blocks=1 00:10:34.620 --rc geninfo_unexecuted_blocks=1 00:10:34.620 00:10:34.620 ' 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:34.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.620 --rc genhtml_branch_coverage=1 00:10:34.620 --rc genhtml_function_coverage=1 00:10:34.620 --rc genhtml_legend=1 00:10:34.620 --rc geninfo_all_blocks=1 00:10:34.620 --rc geninfo_unexecuted_blocks=1 00:10:34.620 00:10:34.620 ' 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:34.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.620 --rc genhtml_branch_coverage=1 00:10:34.620 --rc genhtml_function_coverage=1 00:10:34.620 --rc genhtml_legend=1 00:10:34.620 --rc geninfo_all_blocks=1 00:10:34.620 --rc geninfo_unexecuted_blocks=1 00:10:34.620 00:10:34.620 ' 00:10:34.620 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:34.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.620 --rc genhtml_branch_coverage=1 00:10:34.620 --rc genhtml_function_coverage=1 00:10:34.620 --rc genhtml_legend=1 00:10:34.620 --rc geninfo_all_blocks=1 00:10:34.621 --rc geninfo_unexecuted_blocks=1 00:10:34.621 00:10:34.621 ' 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.621 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:41.201 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.201 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:41.202 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:41.202 Found net devices under 0000:af:00.0: cvl_0_0 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.202 20:30:33 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:41.202 Found net devices under 0000:af:00.1: cvl_0_1 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:41.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:10:41.202 00:10:41.202 --- 10.0.0.2 ping statistics --- 00:10:41.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.202 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:10:41.202 00:10:41.202 --- 10.0.0.1 ping statistics --- 00:10:41.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.202 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:41.202 only one NIC for nvmf test 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.202 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:41.203 20:30:33 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.203 rmmod nvme_tcp 00:10:41.203 rmmod nvme_fabrics 00:10:41.203 rmmod nvme_keyring 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.203 20:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.114 00:10:43.114 real 0m8.423s 00:10:43.114 user 0m1.769s 00:10:43.114 sys 0m4.645s 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:43.114 ************************************ 00:10:43.114 END TEST nvmf_target_multipath 00:10:43.114 ************************************ 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.114 ************************************ 00:10:43.114 START TEST nvmf_zcopy 00:10:43.114 ************************************ 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:43.114 * Looking for test storage... 00:10:43.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.114 20:30:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:43.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.114 --rc genhtml_branch_coverage=1 00:10:43.114 --rc genhtml_function_coverage=1 00:10:43.114 --rc genhtml_legend=1 00:10:43.114 --rc geninfo_all_blocks=1 00:10:43.114 --rc geninfo_unexecuted_blocks=1 00:10:43.114 00:10:43.114 ' 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:43.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.114 --rc genhtml_branch_coverage=1 00:10:43.114 --rc genhtml_function_coverage=1 00:10:43.114 --rc genhtml_legend=1 00:10:43.114 --rc geninfo_all_blocks=1 00:10:43.114 --rc geninfo_unexecuted_blocks=1 00:10:43.114 00:10:43.114 ' 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:43.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.114 --rc genhtml_branch_coverage=1 00:10:43.114 --rc genhtml_function_coverage=1 00:10:43.114 --rc genhtml_legend=1 00:10:43.114 --rc geninfo_all_blocks=1 00:10:43.114 --rc geninfo_unexecuted_blocks=1 00:10:43.114 00:10:43.114 ' 00:10:43.114 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:43.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.114 --rc genhtml_branch_coverage=1 00:10:43.114 --rc 
genhtml_function_coverage=1 00:10:43.114 --rc genhtml_legend=1 00:10:43.114 --rc geninfo_all_blocks=1 00:10:43.115 --rc geninfo_unexecuted_blocks=1 00:10:43.115 00:10:43.115 ' 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.115 20:30:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.115 20:30:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.115 20:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.691 20:30:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:49.691 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:49.691 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:49.691 Found net devices under 0000:af:00.0: cvl_0_0 00:10:49.691 20:30:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.691 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:49.692 Found net devices under 0000:af:00.1: cvl_0_1 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.692 20:30:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:10:49.692 00:10:49.692 --- 10.0.0.2 ping statistics --- 00:10:49.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.692 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:10:49.692 00:10:49.692 --- 10.0.0.1 ping statistics --- 00:10:49.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.692 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=230731 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 230731 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 230731 ']' 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.692 [2024-12-05 20:30:42.406052] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:49.692 [2024-12-05 20:30:42.406097] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.692 [2024-12-05 20:30:42.480718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.692 [2024-12-05 20:30:42.516403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.692 [2024-12-05 20:30:42.516433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:49.692 [2024-12-05 20:30:42.516440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.692 [2024-12-05 20:30:42.516445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.692 [2024-12-05 20:30:42.516450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.692 [2024-12-05 20:30:42.517014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.692 [2024-12-05 20:30:42.659576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.692 [2024-12-05 20:30:42.683777] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.692 malloc0 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.692 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:49.693 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:49.693 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:49.693 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:49.693 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:49.693 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:49.693 { 00:10:49.693 "params": { 00:10:49.693 "name": "Nvme$subsystem", 00:10:49.693 "trtype": "$TEST_TRANSPORT", 00:10:49.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.693 "adrfam": "ipv4", 00:10:49.693 "trsvcid": "$NVMF_PORT", 00:10:49.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.693 "hdgst": ${hdgst:-false}, 00:10:49.693 "ddgst": ${ddgst:-false} 00:10:49.693 }, 00:10:49.693 "method": "bdev_nvme_attach_controller" 00:10:49.693 } 00:10:49.693 EOF 00:10:49.693 )") 00:10:49.693 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:49.693 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:49.693 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:49.693 20:30:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:49.693 "params": { 00:10:49.693 "name": "Nvme1", 00:10:49.693 "trtype": "tcp", 00:10:49.693 "traddr": "10.0.0.2", 00:10:49.693 "adrfam": "ipv4", 00:10:49.693 "trsvcid": "4420", 00:10:49.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.693 "hdgst": false, 00:10:49.693 "ddgst": false 00:10:49.693 }, 00:10:49.693 "method": "bdev_nvme_attach_controller" 00:10:49.693 }' 00:10:49.693 [2024-12-05 20:30:42.765963] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:49.693 [2024-12-05 20:30:42.766002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230764 ] 00:10:49.693 [2024-12-05 20:30:42.839158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.693 [2024-12-05 20:30:42.876892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.693 Running I/O for 10 seconds... 
00:10:52.009 9525.00 IOPS, 74.41 MiB/s [2024-12-05T19:30:46.389Z] 9612.00 IOPS, 75.09 MiB/s [2024-12-05T19:30:47.327Z] 9626.33 IOPS, 75.21 MiB/s [2024-12-05T19:30:48.266Z] 9637.75 IOPS, 75.29 MiB/s [2024-12-05T19:30:49.205Z] 9649.20 IOPS, 75.38 MiB/s [2024-12-05T19:30:50.144Z] 9655.83 IOPS, 75.44 MiB/s [2024-12-05T19:30:51.524Z] 9651.57 IOPS, 75.40 MiB/s [2024-12-05T19:30:52.461Z] 9642.38 IOPS, 75.33 MiB/s [2024-12-05T19:30:53.400Z] 9655.22 IOPS, 75.43 MiB/s [2024-12-05T19:30:53.400Z] 9665.20 IOPS, 75.51 MiB/s 00:10:59.959 Latency(us) 00:10:59.959 [2024-12-05T19:30:53.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.959 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:59.959 Verification LBA range: start 0x0 length 0x1000 00:10:59.959 Nvme1n1 : 10.05 9626.52 75.21 0.00 0.00 13205.86 2144.81 40989.79 00:10:59.959 [2024-12-05T19:30:53.400Z] =================================================================================================================== 00:10:59.959 [2024-12-05T19:30:53.400Z] Total : 9626.52 75.21 0.00 0.00 13205.86 2144.81 40989.79 00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=232607 00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:59.959 20:30:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:59.959 { 00:10:59.959 "params": { 00:10:59.959 "name": "Nvme$subsystem", 00:10:59.959 "trtype": "$TEST_TRANSPORT", 00:10:59.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:59.959 "adrfam": "ipv4", 00:10:59.959 "trsvcid": "$NVMF_PORT", 00:10:59.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:59.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:59.959 "hdgst": ${hdgst:-false}, 00:10:59.959 "ddgst": ${ddgst:-false} 00:10:59.959 }, 00:10:59.959 "method": "bdev_nvme_attach_controller" 00:10:59.959 } 00:10:59.959 EOF 00:10:59.959 )") 00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:59.959 [2024-12-05 20:30:53.348172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.959 [2024-12-05 20:30:53.348206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:59.959 20:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:59.959 "params": { 00:10:59.959 "name": "Nvme1", 00:10:59.959 "trtype": "tcp", 00:10:59.959 "traddr": "10.0.0.2", 00:10:59.959 "adrfam": "ipv4", 00:10:59.960 "trsvcid": "4420", 00:10:59.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:59.960 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:59.960 "hdgst": false, 00:10:59.960 "ddgst": false 00:10:59.960 }, 00:10:59.960 "method": "bdev_nvme_attach_controller" 00:10:59.960 }' 00:10:59.960 [2024-12-05 20:30:53.360162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.960 [2024-12-05 20:30:53.360174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.960 [2024-12-05 20:30:53.372192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.960 [2024-12-05 20:30:53.372202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.960 [2024-12-05 20:30:53.384219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.960 [2024-12-05 20:30:53.384228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.960 [2024-12-05 20:30:53.387819] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:10:59.960 [2024-12-05 20:30:53.387864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232607 ] 00:10:59.960 [2024-12-05 20:30:53.396249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.960 [2024-12-05 20:30:53.396260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.408279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.408287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.420316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.420326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.432347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.432355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.444389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.444420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.456410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.456422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.460317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.219 [2024-12-05 20:30:53.468442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:00.219 [2024-12-05 20:30:53.468456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.480474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.480486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.492502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.492513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.497892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.219 [2024-12-05 20:30:53.504536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.504547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.516582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.516601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.528603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.528619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.540637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.540651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.552665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.552678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.564697] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.564710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.576725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.576735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.588773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.588792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.600801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.600814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.612833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.612848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.624860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.624870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.636890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.636899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.219 [2024-12-05 20:30:53.648922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.219 [2024-12-05 20:30:53.648931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.660964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.660978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.672994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.673014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.685229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.685247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.697064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.697077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 Running I/O for 5 seconds... 00:11:00.477 [2024-12-05 20:30:53.712417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.712435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.725914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.725931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.739714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.739732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.752804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.752821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.766756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.766774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.779636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.779654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.792191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.792209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.805724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.805741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.819503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.819521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.832212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.832230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.844888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.844906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.857468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.857485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.870425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 
[2024-12-05 20:30:53.870443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.884126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.884144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.897054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.897077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.477 [2024-12-05 20:30:53.910143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.477 [2024-12-05 20:30:53.910159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:53.922731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:53.922755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:53.931716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:53.931733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:53.940583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:53.940600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:53.954265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:53.954281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:53.967045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:53.967070] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:53.980268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:53.980285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:53.993707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:53.993723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.006725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.006742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.019738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.019755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.033011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.033028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.046401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.046418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.059683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.059699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.072527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.072544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:00.736 [2024-12-05 20:30:54.084887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.084904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.098218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.098235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.111230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.111246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.125064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.125081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.139120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.139136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.152591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.152608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.736 [2024-12-05 20:30:54.165961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.736 [2024-12-05 20:30:54.165978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.178733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.178750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.191273] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.191290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.204921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.204937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.218481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.218498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.231927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.231944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.245126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.245143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.254648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.254665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.268461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.268478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.282057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.282080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.294858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.294876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.308234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.308251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.316700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.316716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.330348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.330365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.343094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.343111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.356588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.995 [2024-12-05 20:30:54.356606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.995 [2024-12-05 20:30:54.369298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.996 [2024-12-05 20:30:54.369317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.996 [2024-12-05 20:30:54.381842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.996 [2024-12-05 20:30:54.381859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.996 [2024-12-05 20:30:54.394521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.996 
[2024-12-05 20:30:54.394537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.996 [2024-12-05 20:30:54.407169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.996 [2024-12-05 20:30:54.407186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.996 [2024-12-05 20:30:54.419948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.996 [2024-12-05 20:30:54.419966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.996 [2024-12-05 20:30:54.432907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.996 [2024-12-05 20:30:54.432923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.445922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.445938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.458827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.458844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.472467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.472484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.485594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.485611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.498680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.498698] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.512178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.512197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.521461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.521479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.530636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.530652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.544523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.544540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.557605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.557622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.570362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.570379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.583753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.583769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.597567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.597584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:01.255 [2024-12-05 20:30:54.611082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.611099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.623795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.623812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.637260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.637277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.650129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.650145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.659273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.659289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.672444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.672460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.255 [2024-12-05 20:30:54.685995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.255 [2024-12-05 20:30:54.686012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.514 [2024-12-05 20:30:54.699208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.514 [2024-12-05 20:30:54.699225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.514 18420.00 IOPS, 143.91 MiB/s 
[2024-12-05T19:30:54.955Z] [2024-12-05 20:30:54.711863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.514 [2024-12-05 20:30:54.711880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.514 [2024-12-05 20:30:54.720244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.514 [2024-12-05 20:30:54.720260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.514 [2024-12-05 20:30:54.733725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.514 [2024-12-05 20:30:54.733742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.514 [2024-12-05 20:30:54.746859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.514 [2024-12-05 20:30:54.746876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.514 [2024-12-05 20:30:54.759926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.514 [2024-12-05 20:30:54.759944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.514 [2024-12-05 20:30:54.773186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.514 [2024-12-05 20:30:54.773204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.514 [2024-12-05 20:30:54.786874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.514 [2024-12-05 20:30:54.786891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.514 [2024-12-05 20:30:54.800330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.514 [2024-12-05 20:30:54.800347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.813221] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.813238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.826462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.826479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.835754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.835770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.848910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.848928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.861681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.861698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.874440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.874461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.887637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.887654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.900366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.900383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.912764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.912780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.926007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.926023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.938866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.938883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.515 [2024-12-05 20:30:54.951604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.515 [2024-12-05 20:30:54.951621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:54.960137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:54.960154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:54.973791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:54.973809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:54.986797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:54.986815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.000412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.000429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.014141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 
[2024-12-05 20:30:55.014158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.027007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.027025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.039778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.039795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.052592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.052609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.065208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.065226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.078719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.078736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.088420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.088436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.097779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.097796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.111740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.111762] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.125511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.125528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.138365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.138381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.151362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.151379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.164798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.164814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.178167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.178184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.191826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.191844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.774 [2024-12-05 20:30:55.204695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.774 [2024-12-05 20:30:55.204712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.033 [2024-12-05 20:30:55.217845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.033 [2024-12-05 20:30:55.217863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:02.033 [2024-12-05 20:30:55.231270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.033 [2024-12-05 20:30:55.231287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:02.292 18513.50 IOPS, 144.64 MiB/s [2024-12-05T19:30:55.733Z]
00:11:03.330 18516.33 IOPS, 144.66 MiB/s [2024-12-05T19:30:56.771Z]
00:11:03.848 [2024-12-05 20:30:57.281797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.848 
[2024-12-05 20:30:57.281815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.106 [2024-12-05 20:30:57.295502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.106 [2024-12-05 20:30:57.295518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.106 [2024-12-05 20:30:57.307901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.106 [2024-12-05 20:30:57.307918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.106 [2024-12-05 20:30:57.320864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.320882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.333903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.333920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.347605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.347622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.360488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.360504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.373207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.373224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.385649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.385667] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.398670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.398686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.411652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.411669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.425480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.425498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.438601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.438618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.451450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.451467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.464224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.464241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.477115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.477133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.490529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.490545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:04.107 [2024-12-05 20:30:57.503203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.503220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.515843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.515860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.528798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.528815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.107 [2024-12-05 20:30:57.541836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.107 [2024-12-05 20:30:57.541852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.550545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.550561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.559482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.559498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.572867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.572884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.585589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.585606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.598210] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.598228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.611339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.611356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.624278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.624295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.637363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.637379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.650843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.650860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.663479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.663496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.676984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.677001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.690453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.690470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 [2024-12-05 20:30:57.704021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:04.365 [2024-12-05 20:30:57.704038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.365 18546.00 IOPS, 144.89 MiB/s [2024-12-05T19:30:57.806Z] [2024-12-05 20:30:57.716956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.366 [2024-12-05 20:30:57.716973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.366 [2024-12-05 20:30:57.730703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.366 [2024-12-05 20:30:57.730724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.366 [2024-12-05 20:30:57.744063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.366 [2024-12-05 20:30:57.744087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.366 [2024-12-05 20:30:57.753223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.366 [2024-12-05 20:30:57.753239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.366 [2024-12-05 20:30:57.766237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.366 [2024-12-05 20:30:57.766254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.366 [2024-12-05 20:30:57.778996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.366 [2024-12-05 20:30:57.779014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.366 [2024-12-05 20:30:57.792452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.366 [2024-12-05 20:30:57.792469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.806010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.806027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.814855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.814872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.828132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.828149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.840541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.840557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.854244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.854262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.867231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.867247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.880020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.880038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.892547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.892565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.904780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 
[2024-12-05 20:30:57.904797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.917882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.917901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.930923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.930940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.944155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.944173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.957856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.957874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.970600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.970621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.984371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.984389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:57.998133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:57.998151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:58.006931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:58.006948] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:58.020593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:58.020610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:58.033766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:58.033784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:58.043064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:58.043081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.624 [2024-12-05 20:30:58.052337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.624 [2024-12-05 20:30:58.052355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.065252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.065269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.078002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.078020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.090750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.090767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.104083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.104100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:04.883 [2024-12-05 20:30:58.117599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.117616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.130910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.130928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.143957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.143975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.156868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.156885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.165173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.165189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.178485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.178502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.191389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.191406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.205110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.205132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.218345] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.218361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.231786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.231803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.240863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.240880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.254394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.254411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.267911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.267928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.281439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.281457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.294767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.294784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.308333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.308350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.883 [2024-12-05 20:30:58.322025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:04.883 [2024-12-05 20:30:58.322041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.335260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.335277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.348853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.348870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.358204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.358220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.370983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.371000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.384570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.384586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.397538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.397555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.410620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.410636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.423575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 
[2024-12-05 20:30:58.423592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.436014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.436031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.449170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.449192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.462450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.462467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.476205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.476222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.489591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.489608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.502770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.502788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.512087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.512104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.525639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.525656] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.539191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.539208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.552697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.552714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.566638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.566655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.142 [2024-12-05 20:30:58.579382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.142 [2024-12-05 20:30:58.579398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.402 [2024-12-05 20:30:58.592425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.402 [2024-12-05 20:30:58.592441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.402 [2024-12-05 20:30:58.605594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.402 [2024-12-05 20:30:58.605612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.402 [2024-12-05 20:30:58.618668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.402 [2024-12-05 20:30:58.618686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.402 [2024-12-05 20:30:58.631656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.402 [2024-12-05 20:30:58.631673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:11:05.402 [2024-12-05 20:30:58.644095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.644112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.657173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.657190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.670196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.670213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.682754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.682771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.695832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.695848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.708648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.708664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 18547.60 IOPS, 144.90 MiB/s
00:11:05.402 Latency(us)
00:11:05.402 [2024-12-05T19:30:58.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:05.402 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:05.402 Nvme1n1 : 5.00 18555.56 144.97 0.00 0.00 6892.97 3038.49 18469.24
00:11:05.402 [2024-12-05T19:30:58.843Z] ===================================================================================================================
00:11:05.402 [2024-12-05T19:30:58.843Z] Total : 18555.56 144.97 0.00 0.00 6892.97 3038.49 18469.24
00:11:05.402 [2024-12-05 20:30:58.718712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.718728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.730744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.730759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.742787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.742805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.754811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.754827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.766842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.766857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.778870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.778885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.790905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.790919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:05.402 [2024-12-05 20:30:58.802935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:05.402 [2024-12-05 20:30:58.802949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused:
*ERROR*: Unable to add namespace 00:11:05.402 [2024-12-05 20:30:58.814966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.402 [2024-12-05 20:30:58.814981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.402 [2024-12-05 20:30:58.826996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.402 [2024-12-05 20:30:58.827007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.402 [2024-12-05 20:30:58.839023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.402 [2024-12-05 20:30:58.839032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.661 [2024-12-05 20:30:58.851066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.661 [2024-12-05 20:30:58.851080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.661 [2024-12-05 20:30:58.863092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.661 [2024-12-05 20:30:58.863100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (232607) - No such process 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 232607 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.661 20:30:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.661 delay0 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.661 20:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:05.661 [2024-12-05 20:30:59.017793] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:12.252 [2024-12-05 20:31:05.099403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e1080 is same with the state(6) to be set 00:11:12.252 Initializing NVMe Controllers 00:11:12.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:12.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:12.252 Initialization complete. Launching workers. 
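The trace above (zcopy.sh lines 53-56) drives a specific RPC sequence: create a delay bdev on top of `malloc0`, expose it as namespace 1 of `cnode1`, then run the SPDK `abort` example against the slow namespace so in-flight I/O can be aborted. A minimal sketch of that sequence, assuming a running SPDK target reachable through the standard `scripts/rpc.py` helper on the default socket (paths relative to the SPDK repo root are assumptions, the flags mirror the trace):

```shell
# Wrap malloc0 in a delay bdev (all latencies in microseconds, per the trace)
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Attach the slow bdev as NSID 1 of the test subsystem
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Run the abort example: core mask 0x1, 5 s, qdepth 64, 50/50 randrw
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
```

The per-I/O delays are deliberately large so most submitted commands are still outstanding when the abort requests arrive, which is what the `abort submitted 599` / `success 408` counters below are exercising.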
00:11:12.252 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 312 00:11:12.252 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 599, failed to submit 33 00:11:12.252 success 408, unsuccessful 191, failed 0 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.252 rmmod nvme_tcp 00:11:12.252 rmmod nvme_fabrics 00:11:12.252 rmmod nvme_keyring 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 230731 ']' 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 230731 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 230731 ']' 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 230731 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 230731 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 230731' 00:11:12.252 killing process with pid 230731 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 230731 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 230731 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.252 20:31:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.156 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:14.156 00:11:14.156 real 0m31.315s 00:11:14.156 user 0m43.008s 00:11:14.156 sys 0m9.856s 00:11:14.156 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.156 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:14.156 ************************************ 00:11:14.156 END TEST nvmf_zcopy 00:11:14.156 ************************************ 00:11:14.156 20:31:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:14.156 20:31:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.156 20:31:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.156 20:31:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.156 ************************************ 00:11:14.156 START TEST nvmf_nmic 00:11:14.156 ************************************ 00:11:14.157 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:14.417 * Looking for test storage... 
00:11:14.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.417 20:31:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:14.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.417 --rc genhtml_branch_coverage=1 00:11:14.417 --rc genhtml_function_coverage=1 00:11:14.417 --rc genhtml_legend=1 00:11:14.417 --rc geninfo_all_blocks=1 00:11:14.417 --rc geninfo_unexecuted_blocks=1 
00:11:14.417 00:11:14.417 ' 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:14.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.417 --rc genhtml_branch_coverage=1 00:11:14.417 --rc genhtml_function_coverage=1 00:11:14.417 --rc genhtml_legend=1 00:11:14.417 --rc geninfo_all_blocks=1 00:11:14.417 --rc geninfo_unexecuted_blocks=1 00:11:14.417 00:11:14.417 ' 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:14.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.417 --rc genhtml_branch_coverage=1 00:11:14.417 --rc genhtml_function_coverage=1 00:11:14.417 --rc genhtml_legend=1 00:11:14.417 --rc geninfo_all_blocks=1 00:11:14.417 --rc geninfo_unexecuted_blocks=1 00:11:14.417 00:11:14.417 ' 00:11:14.417 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:14.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.418 --rc genhtml_branch_coverage=1 00:11:14.418 --rc genhtml_function_coverage=1 00:11:14.418 --rc genhtml_legend=1 00:11:14.418 --rc geninfo_all_blocks=1 00:11:14.418 --rc geninfo_unexecuted_blocks=1 00:11:14.418 00:11:14.418 ' 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.418 20:31:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:14.418 
20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.418 20:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.000 20:31:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:21.000 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:21.000 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:21.000 Found net devices under 0000:af:00.0: cvl_0_0 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:21.000 Found net devices under 0000:af:00.1: cvl_0_1 00:11:21.000 
20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
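The `nvmf_tcp_init` trace above builds the test topology: the target-side NIC (`cvl_0_0`) is moved into a dedicated network namespace so target and initiator can talk over real TCP on one host. A condensed sketch of the same steps, assuming root privileges and the interface names from the trace (this is environment setup, not a runnable standalone script):

```shell
# Target NIC goes into its own namespace; initiator NIC stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator 10.0.0.1, target 10.0.0.2, same /24
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe-oF port; the comment tags the rule for later cleanup
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
```

The two `ping -c 1` probes that follow in the log verify both directions of this path before the nvmf target application is started inside the namespace.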
00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.000 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:11:21.001 00:11:21.001 --- 10.0.0.2 ping statistics --- 00:11:21.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.001 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:21.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:11:21.001 00:11:21.001 --- 10.0.0.1 ping statistics --- 00:11:21.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.001 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=238444 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 238444 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 238444 ']' 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.001 20:31:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.001 [2024-12-05 20:31:13.798271] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:11:21.001 [2024-12-05 20:31:13.798318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.001 [2024-12-05 20:31:13.872795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.001 [2024-12-05 20:31:13.914798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.001 [2024-12-05 20:31:13.914829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:21.001 [2024-12-05 20:31:13.914836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.001 [2024-12-05 20:31:13.914842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.001 [2024-12-05 20:31:13.914846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.001 [2024-12-05 20:31:13.916277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.001 [2024-12-05 20:31:13.916311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.001 [2024-12-05 20:31:13.916341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.001 [2024-12-05 20:31:13.916343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.261 [2024-12-05 20:31:14.650422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.261 
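The target is launched with `-m 0xF`, and the four "Reactor started on core 0..3" notices above follow directly from that mask: each set bit pins one reactor thread to one core. A small sketch of the bit-count arithmetic (the variable names here are illustrative, not from the SPDK scripts):

```shell
# Count the set bits in the SPDK core mask; each bit corresponds to one
# reactor, so 0xF (binary 1111) yields the four reactors seen in the log.
mask=0xF
cores=0
for (( m = mask; m > 0; m >>= 1 )); do
    (( cores += m & 1 ))
done
echo "reactors: $cores"   # -> reactors: 4
```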
20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.261 Malloc0 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.261 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.262 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.262 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.262 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.262 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.521 [2024-12-05 20:31:14.710816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:21.521 test case1: single bdev can't be used in multiple subsystems 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.521 [2024-12-05 20:31:14.738747] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:21.521 [2024-12-05 
20:31:14.738766] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:21.521 [2024-12-05 20:31:14.738772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.521 request: 00:11:21.521 { 00:11:21.521 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:21.521 "namespace": { 00:11:21.521 "bdev_name": "Malloc0", 00:11:21.521 "no_auto_visible": false, 00:11:21.521 "hide_metadata": false 00:11:21.521 }, 00:11:21.521 "method": "nvmf_subsystem_add_ns", 00:11:21.521 "req_id": 1 00:11:21.521 } 00:11:21.521 Got JSON-RPC error response 00:11:21.521 response: 00:11:21.521 { 00:11:21.521 "code": -32602, 00:11:21.521 "message": "Invalid parameters" 00:11:21.521 } 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:21.521 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:21.521 Adding namespace failed - expected result. 
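The "expected result" line above comes from nmic.sh's negative-test pattern: the second `nvmf_subsystem_add_ns` on an already-claimed bdev must fail, and the test passes only when it does. A self-contained sketch of that control flow, with `rpc_cmd` stubbed out by `false` since the real command is SPDK's JSON-RPC client and needs a running target:

```shell
# Stand-in for the real rpc_cmd, which returns non-zero when the RPC
# (here, adding Malloc0 to a second subsystem) is rejected by the target.
rpc_cmd() { false; }

nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1

# Mirrors the nmic.sh branch: a zero status here would mean the bdev was
# claimed twice, which is the failure case for this test.
if [ "$nmic_status" -eq 0 ]; then
    echo 'Adding namespace passed - failure expected.'
else
    echo ' Adding namespace failed - expected result.'
fi
```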
00:11:21.522 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:21.522 test case2: host connect to nvmf target in multiple paths 00:11:21.522 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:21.522 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.522 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.522 [2024-12-05 20:31:14.750857] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:21.522 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.522 20:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.949 20:31:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:24.326 20:31:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.326 20:31:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:24.326 20:31:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.326 20:31:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:24.326 20:31:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:26.230 20:31:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:26.230 20:31:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:26.230 20:31:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.230 20:31:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:26.230 20:31:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.230 20:31:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:26.230 20:31:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:26.230 [global] 00:11:26.230 thread=1 00:11:26.230 invalidate=1 00:11:26.230 rw=write 00:11:26.230 time_based=1 00:11:26.230 runtime=1 00:11:26.230 ioengine=libaio 00:11:26.230 direct=1 00:11:26.230 bs=4096 00:11:26.230 iodepth=1 00:11:26.230 norandommap=0 00:11:26.230 numjobs=1 00:11:26.230 00:11:26.230 verify_dump=1 00:11:26.230 verify_backlog=512 00:11:26.230 verify_state_save=0 00:11:26.230 do_verify=1 00:11:26.230 verify=crc32c-intel 00:11:26.230 [job0] 00:11:26.230 filename=/dev/nvme0n1 00:11:26.230 Could not set queue depth (nvme0n1) 00:11:26.825 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.825 fio-3.35 00:11:26.825 Starting 1 thread 00:11:27.758 00:11:27.758 job0: (groupid=0, jobs=1): err= 0: pid=239809: Thu Dec 5 20:31:21 2024 00:11:27.758 read: IOPS=2040, BW=8163KiB/s (8359kB/s)(8432KiB/1033msec) 00:11:27.758 slat (nsec): min=7178, max=36348, avg=8424.20, stdev=1357.17 00:11:27.758 clat (usec): min=166, max=40987, avg=289.25, stdev=1533.27 00:11:27.758 lat (usec): min=173, max=41009, 
avg=297.67, stdev=1533.66 00:11:27.758 clat percentiles (usec): 00:11:27.758 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 219], 00:11:27.758 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 233], 00:11:27.758 | 70.00th=[ 239], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 277], 00:11:27.758 | 99.00th=[ 289], 99.50th=[ 330], 99.90th=[40633], 99.95th=[41157], 00:11:27.758 | 99.99th=[41157] 00:11:27.758 write: IOPS=2478, BW=9913KiB/s (10.1MB/s)(10.0MiB/1033msec); 0 zone resets 00:11:27.758 slat (usec): min=10, max=23930, avg=20.87, stdev=472.75 00:11:27.758 clat (usec): min=109, max=335, avg=132.01, stdev=13.97 00:11:27.758 lat (usec): min=120, max=24266, avg=152.87, stdev=476.98 00:11:27.758 clat percentiles (usec): 00:11:27.758 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 125], 00:11:27.758 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 130], 00:11:27.758 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 151], 95.00th=[ 169], 00:11:27.758 | 99.00th=[ 180], 99.50th=[ 182], 99.90th=[ 223], 99.95th=[ 260], 00:11:27.758 | 99.99th=[ 338] 00:11:27.758 bw ( KiB/s): min= 8192, max=12288, per=100.00%, avg=10240.00, stdev=2896.31, samples=2 00:11:27.758 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:27.758 lat (usec) : 250=89.93%, 500=9.98%, 750=0.02% 00:11:27.758 lat (msec) : 50=0.06% 00:11:27.758 cpu : usr=3.59%, sys=7.36%, ctx=4672, majf=0, minf=1 00:11:27.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.758 issued rwts: total=2108,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.758 00:11:27.758 Run status group 0 (all jobs): 00:11:27.758 READ: bw=8163KiB/s (8359kB/s), 8163KiB/s-8163KiB/s (8359kB/s-8359kB/s), io=8432KiB (8634kB), 
run=1033-1033msec 00:11:27.758 WRITE: bw=9913KiB/s (10.1MB/s), 9913KiB/s-9913KiB/s (10.1MB/s-10.1MB/s), io=10.0MiB (10.5MB), run=1033-1033msec 00:11:27.758 00:11:27.758 Disk stats (read/write): 00:11:27.758 nvme0n1: ios=2110/2560, merge=0/0, ticks=767/313, in_queue=1080, util=98.60% 00:11:27.758 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 
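fio's summary above reports bandwidth in both binary and decimal units, e.g. "8163KiB/s (8359kB/s)" for the read side. The decimal figure is just the binary one scaled by 1024/1000 and rounded, which can be checked with shell arithmetic:

```shell
# Convert fio's binary-unit bandwidth (KiB/s) to the decimal figure (kB/s)
# it prints in parentheses: multiply by 1024, divide by 1000, round to nearest.
kib_per_s=8163
kb_per_s=$(( (kib_per_s * 1024 + 500) / 1000 ))
echo "${kib_per_s}KiB/s = ${kb_per_s}kB/s"   # -> 8163KiB/s = 8359kB/s
```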
-- # for i in {1..20} 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.016 rmmod nvme_tcp 00:11:28.016 rmmod nvme_fabrics 00:11:28.016 rmmod nvme_keyring 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 238444 ']' 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 238444 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 238444 ']' 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 238444 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 238444 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 238444' 00:11:28.016 killing process with pid 238444 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 238444 00:11:28.016 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 238444 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.276 20:31:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.813 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:30.813 00:11:30.814 real 0m16.173s 00:11:30.814 user 0m40.656s 00:11:30.814 sys 0m5.588s 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.814 ************************************ 00:11:30.814 END TEST nvmf_nmic 00:11:30.814 ************************************ 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:30.814 ************************************ 00:11:30.814 START TEST nvmf_fio_target 00:11:30.814 ************************************ 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:30.814 * Looking for test storage... 00:11:30.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.814 20:31:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:30.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.814 --rc genhtml_branch_coverage=1 00:11:30.814 --rc genhtml_function_coverage=1 00:11:30.814 --rc genhtml_legend=1 00:11:30.814 --rc geninfo_all_blocks=1 00:11:30.814 --rc geninfo_unexecuted_blocks=1 00:11:30.814 00:11:30.814 ' 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:30.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.814 --rc genhtml_branch_coverage=1 00:11:30.814 --rc genhtml_function_coverage=1 00:11:30.814 --rc genhtml_legend=1 00:11:30.814 --rc geninfo_all_blocks=1 00:11:30.814 --rc geninfo_unexecuted_blocks=1 00:11:30.814 00:11:30.814 ' 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:30.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.814 --rc genhtml_branch_coverage=1 00:11:30.814 --rc genhtml_function_coverage=1 00:11:30.814 --rc genhtml_legend=1 00:11:30.814 --rc geninfo_all_blocks=1 00:11:30.814 --rc geninfo_unexecuted_blocks=1 00:11:30.814 00:11:30.814 ' 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:30.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.814 --rc 
genhtml_branch_coverage=1 00:11:30.814 --rc genhtml_function_coverage=1 00:11:30.814 --rc genhtml_legend=1 00:11:30.814 --rc geninfo_all_blocks=1 00:11:30.814 --rc geninfo_unexecuted_blocks=1 00:11:30.814 00:11:30.814 ' 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.814 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.815 20:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.815 20:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:30.815 20:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:30.815 20:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:30.815 20:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.391 20:31:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:37.391 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:37.391 20:31:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:37.391 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:37.391 Found net devices under 0000:af:00.0: cvl_0_0 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:37.391 Found net devices under 0000:af:00.1: cvl_0_1 
00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:37.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:11:37.391 00:11:37.391 --- 10.0.0.2 ping statistics --- 00:11:37.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.391 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:11:37.391 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:11:37.391 00:11:37.391 --- 10.0.0.1 ping statistics --- 00:11:37.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.391 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=243682 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 243682 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 243682 ']' 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.392 20:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.392 [2024-12-05 20:31:30.020663] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:11:37.392 [2024-12-05 20:31:30.020704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.392 [2024-12-05 20:31:30.098061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.392 [2024-12-05 20:31:30.137914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.392 [2024-12-05 20:31:30.137951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.392 [2024-12-05 20:31:30.137960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.392 [2024-12-05 20:31:30.137965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.392 [2024-12-05 20:31:30.137970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:37.392 [2024-12-05 20:31:30.139563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.392 [2024-12-05 20:31:30.139583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.392 [2024-12-05 20:31:30.139674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.392 [2024-12-05 20:31:30.139676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.650 20:31:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.650 20:31:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:37.650 20:31:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.650 20:31:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.650 20:31:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.650 20:31:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.650 20:31:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:37.650 [2024-12-05 20:31:31.030499] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.650 20:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:37.907 20:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:37.907 20:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.165 20:31:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:38.165 20:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.422 20:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:38.422 20:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.679 20:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:38.679 20:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:38.679 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.936 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:38.936 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.193 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:39.193 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.449 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:39.449 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:39.449 20:31:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.706 20:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:39.706 20:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:39.964 20:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:39.964 20:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.221 20:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.222 [2024-12-05 20:31:33.568283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.222 20:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:40.480 20:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:40.738 20:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
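The trace above drives SPDK's JSON-RPC interface via `scripts/rpc.py`: create the TCP transport, carve out malloc bdevs, assemble raid0/concat0 volumes, then expose everything as namespaces of subsystem `nqn.2016-06.io.spdk:cnode1` with a listener on 10.0.0.2:4420. A dry-run sketch of that flow (the `run` echo-wrapper and relative `scripts/rpc.py` path are illustrative; drop the `echo` to talk to a live `nvmf_tgt`):

```shell
# Dry-run sketch of the RPC sequence the log performs against a running nvmf_tgt.
rpc="scripts/rpc.py"        # path assumed relative to an SPDK checkout
run() { echo "$rpc $*"; }   # echoes instead of executing (dry run)

run nvmf_create_transport -t tcp -o -u 8192
run bdev_malloc_create 64 512            # -> Malloc0
run bdev_malloc_create 64 512            # -> Malloc1
run bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
run bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
run nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
run nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
```

The four namespaces (Malloc0, Malloc1, raid0, concat0) are why the subsequent `waitforserial SPDKISFASTANDAWESOME 4` expects four block devices after `nvme connect`.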
00:11:42.112 20:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:42.112 20:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:42.112 20:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.112 20:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:42.112 20:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:42.112 20:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:44.012 20:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:44.012 20:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:44.012 20:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.012 20:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:44.012 20:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.012 20:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:44.012 20:31:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:44.012 [global] 00:11:44.012 thread=1 00:11:44.012 invalidate=1 00:11:44.012 rw=write 00:11:44.012 time_based=1 00:11:44.012 runtime=1 00:11:44.012 ioengine=libaio 00:11:44.012 direct=1 00:11:44.012 bs=4096 00:11:44.012 iodepth=1 00:11:44.012 norandommap=0 00:11:44.012 numjobs=1 00:11:44.012 00:11:44.012 
verify_dump=1 00:11:44.012 verify_backlog=512 00:11:44.012 verify_state_save=0 00:11:44.012 do_verify=1 00:11:44.012 verify=crc32c-intel 00:11:44.012 [job0] 00:11:44.012 filename=/dev/nvme0n1 00:11:44.012 [job1] 00:11:44.012 filename=/dev/nvme0n2 00:11:44.012 [job2] 00:11:44.012 filename=/dev/nvme0n3 00:11:44.012 [job3] 00:11:44.012 filename=/dev/nvme0n4 00:11:44.012 Could not set queue depth (nvme0n1) 00:11:44.012 Could not set queue depth (nvme0n2) 00:11:44.012 Could not set queue depth (nvme0n3) 00:11:44.012 Could not set queue depth (nvme0n4) 00:11:44.270 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.270 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.270 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.270 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.270 fio-3.35 00:11:44.270 Starting 4 threads 00:11:45.646 00:11:45.646 job0: (groupid=0, jobs=1): err= 0: pid=245447: Thu Dec 5 20:31:38 2024 00:11:45.646 read: IOPS=2040, BW=8164KiB/s (8360kB/s)(8172KiB/1001msec) 00:11:45.646 slat (nsec): min=7300, max=24214, avg=8740.68, stdev=1206.97 00:11:45.646 clat (usec): min=175, max=532, avg=284.74, stdev=66.29 00:11:45.646 lat (usec): min=184, max=541, avg=293.48, stdev=66.41 00:11:45.646 clat percentiles (usec): 00:11:45.646 | 1.00th=[ 208], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 245], 00:11:45.646 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:11:45.646 | 70.00th=[ 281], 80.00th=[ 310], 90.00th=[ 355], 95.00th=[ 469], 00:11:45.646 | 99.00th=[ 506], 99.50th=[ 515], 99.90th=[ 519], 99.95th=[ 523], 00:11:45.646 | 99.99th=[ 537] 00:11:45.646 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:45.646 slat (nsec): min=10895, max=51107, avg=12267.37, stdev=1933.28 
00:11:45.646 clat (usec): min=116, max=323, avg=176.98, stdev=39.41 00:11:45.646 lat (usec): min=127, max=374, avg=189.25, stdev=39.67 00:11:45.646 clat percentiles (usec): 00:11:45.646 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 147], 00:11:45.646 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:11:45.646 | 70.00th=[ 184], 80.00th=[ 237], 90.00th=[ 241], 95.00th=[ 243], 00:11:45.646 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 277], 99.95th=[ 281], 00:11:45.646 | 99.99th=[ 326] 00:11:45.646 bw ( KiB/s): min= 8192, max= 8192, per=29.55%, avg=8192.00, stdev= 0.00, samples=1 00:11:45.646 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:45.646 lat (usec) : 250=64.14%, 500=35.00%, 750=0.86% 00:11:45.646 cpu : usr=3.20%, sys=7.10%, ctx=4092, majf=0, minf=1 00:11:45.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.646 issued rwts: total=2043,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.647 job1: (groupid=0, jobs=1): err= 0: pid=245465: Thu Dec 5 20:31:38 2024 00:11:45.647 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:45.647 slat (nsec): min=7463, max=34327, avg=9359.61, stdev=1478.03 00:11:45.647 clat (usec): min=171, max=610, avg=236.45, stdev=47.55 00:11:45.647 lat (usec): min=180, max=620, avg=245.81, stdev=47.49 00:11:45.647 clat percentiles (usec): 00:11:45.647 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:11:45.647 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 233], 60.00th=[ 241], 00:11:45.647 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 281], 00:11:45.647 | 99.00th=[ 478], 99.50th=[ 498], 99.90th=[ 529], 99.95th=[ 562], 00:11:45.647 | 99.99th=[ 611] 00:11:45.647 write: 
IOPS=2555, BW=9.98MiB/s (10.5MB/s)(9.99MiB/1001msec); 0 zone resets 00:11:45.647 slat (usec): min=10, max=40894, avg=33.47, stdev=840.52 00:11:45.647 clat (usec): min=109, max=278, avg=155.00, stdev=16.35 00:11:45.647 lat (usec): min=121, max=41172, avg=188.47, stdev=843.40 00:11:45.647 clat percentiles (usec): 00:11:45.647 | 1.00th=[ 117], 5.00th=[ 126], 10.00th=[ 137], 20.00th=[ 145], 00:11:45.647 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:11:45.647 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:11:45.647 | 99.00th=[ 196], 99.50th=[ 204], 99.90th=[ 255], 99.95th=[ 262], 00:11:45.647 | 99.99th=[ 277] 00:11:45.647 bw ( KiB/s): min= 9680, max= 9680, per=34.91%, avg=9680.00, stdev= 0.00, samples=1 00:11:45.647 iops : min= 2420, max= 2420, avg=2420.00, stdev= 0.00, samples=1 00:11:45.647 lat (usec) : 250=87.13%, 500=12.70%, 750=0.17% 00:11:45.647 cpu : usr=5.00%, sys=7.00%, ctx=4609, majf=0, minf=1 00:11:45.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.647 issued rwts: total=2048,2558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.647 job2: (groupid=0, jobs=1): err= 0: pid=245466: Thu Dec 5 20:31:38 2024 00:11:45.647 read: IOPS=26, BW=108KiB/s (110kB/s)(112KiB/1039msec) 00:11:45.647 slat (nsec): min=2997, max=30019, avg=13728.32, stdev=5234.90 00:11:45.647 clat (usec): min=325, max=41071, avg=33352.10, stdev=15788.51 00:11:45.647 lat (usec): min=343, max=41084, avg=33365.83, stdev=15789.50 00:11:45.647 clat percentiles (usec): 00:11:45.647 | 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[30540], 00:11:45.647 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:45.647 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:11:45.647 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:45.647 | 99.99th=[41157] 00:11:45.647 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:11:45.647 slat (usec): min=12, max=10751, avg=34.38, stdev=474.56 00:11:45.647 clat (usec): min=134, max=236, avg=165.21, stdev=12.92 00:11:45.647 lat (usec): min=148, max=10987, avg=199.59, stdev=477.88 00:11:45.647 clat percentiles (usec): 00:11:45.647 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:11:45.647 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:11:45.647 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:11:45.647 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 237], 99.95th=[ 237], 00:11:45.647 | 99.99th=[ 237] 00:11:45.647 bw ( KiB/s): min= 4096, max= 4096, per=14.77%, avg=4096.00, stdev= 0.00, samples=1 00:11:45.647 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:45.647 lat (usec) : 250=94.81%, 500=0.93% 00:11:45.647 lat (msec) : 50=4.26% 00:11:45.647 cpu : usr=0.67%, sys=0.67%, ctx=542, majf=0, minf=1 00:11:45.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.647 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.647 job3: (groupid=0, jobs=1): err= 0: pid=245467: Thu Dec 5 20:31:38 2024 00:11:45.647 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:45.647 slat (nsec): min=7212, max=44001, avg=8437.14, stdev=1555.11 00:11:45.647 clat (usec): min=184, max=572, avg=291.48, stdev=71.85 00:11:45.647 lat (usec): min=192, max=580, avg=299.92, stdev=72.23 00:11:45.647 clat percentiles (usec): 00:11:45.647 | 1.00th=[ 219], 5.00th=[ 233], 
10.00th=[ 239], 20.00th=[ 247], 00:11:45.647 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:11:45.647 | 70.00th=[ 289], 80.00th=[ 310], 90.00th=[ 437], 95.00th=[ 490], 00:11:45.647 | 99.00th=[ 510], 99.50th=[ 519], 99.90th=[ 545], 99.95th=[ 562], 00:11:45.647 | 99.99th=[ 570] 00:11:45.647 write: IOPS=2081, BW=8328KiB/s (8528kB/s)(8336KiB/1001msec); 0 zone resets 00:11:45.647 slat (nsec): min=10246, max=53785, avg=11384.97, stdev=1928.02 00:11:45.647 clat (usec): min=119, max=367, avg=167.83, stdev=30.46 00:11:45.647 lat (usec): min=132, max=379, avg=179.21, stdev=30.65 00:11:45.647 clat percentiles (usec): 00:11:45.647 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 149], 00:11:45.647 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:11:45.647 | 70.00th=[ 167], 80.00th=[ 180], 90.00th=[ 215], 95.00th=[ 235], 00:11:45.647 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 338], 99.95th=[ 343], 00:11:45.647 | 99.99th=[ 367] 00:11:45.647 bw ( KiB/s): min= 9816, max= 9816, per=35.40%, avg=9816.00, stdev= 0.00, samples=1 00:11:45.647 iops : min= 2454, max= 2454, avg=2454.00, stdev= 0.00, samples=1 00:11:45.647 lat (usec) : 250=62.03%, 500=36.59%, 750=1.38% 00:11:45.647 cpu : usr=4.30%, sys=5.80%, ctx=4132, majf=0, minf=2 00:11:45.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.647 issued rwts: total=2048,2084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.647 00:11:45.647 Run status group 0 (all jobs): 00:11:45.647 READ: bw=23.2MiB/s (24.3MB/s), 108KiB/s-8184KiB/s (110kB/s-8380kB/s), io=24.1MiB (25.3MB), run=1001-1039msec 00:11:45.647 WRITE: bw=27.1MiB/s (28.4MB/s), 1971KiB/s-9.98MiB/s (2018kB/s-10.5MB/s), io=28.1MiB (29.5MB), run=1001-1039msec 
00:11:45.647 00:11:45.647 Disk stats (read/write): 00:11:45.647 nvme0n1: ios=1560/1937, merge=0/0, ticks=1237/302, in_queue=1539, util=84.17% 00:11:45.647 nvme0n2: ios=1674/2048, merge=0/0, ticks=661/309, in_queue=970, util=91.12% 00:11:45.647 nvme0n3: ios=74/512, merge=0/0, ticks=1575/83, in_queue=1658, util=95.39% 00:11:45.647 nvme0n4: ios=1604/2048, merge=0/0, ticks=443/321, in_queue=764, util=92.86% 00:11:45.647 20:31:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:45.647 [global] 00:11:45.647 thread=1 00:11:45.647 invalidate=1 00:11:45.647 rw=randwrite 00:11:45.647 time_based=1 00:11:45.647 runtime=1 00:11:45.647 ioengine=libaio 00:11:45.647 direct=1 00:11:45.647 bs=4096 00:11:45.648 iodepth=1 00:11:45.648 norandommap=0 00:11:45.648 numjobs=1 00:11:45.648 00:11:45.648 verify_dump=1 00:11:45.648 verify_backlog=512 00:11:45.648 verify_state_save=0 00:11:45.648 do_verify=1 00:11:45.648 verify=crc32c-intel 00:11:45.648 [job0] 00:11:45.648 filename=/dev/nvme0n1 00:11:45.648 [job1] 00:11:45.648 filename=/dev/nvme0n2 00:11:45.648 [job2] 00:11:45.648 filename=/dev/nvme0n3 00:11:45.648 [job3] 00:11:45.648 filename=/dev/nvme0n4 00:11:45.648 Could not set queue depth (nvme0n1) 00:11:45.648 Could not set queue depth (nvme0n2) 00:11:45.648 Could not set queue depth (nvme0n3) 00:11:45.648 Could not set queue depth (nvme0n4) 00:11:45.906 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.906 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.906 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.906 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.906 fio-3.35 00:11:45.906 
Starting 4 threads 00:11:47.282 00:11:47.282 job0: (groupid=0, jobs=1): err= 0: pid=245884: Thu Dec 5 20:31:40 2024 00:11:47.282 read: IOPS=1041, BW=4168KiB/s (4268kB/s)(4176KiB/1002msec) 00:11:47.282 slat (nsec): min=6299, max=28075, avg=7614.08, stdev=1792.00 00:11:47.282 clat (usec): min=151, max=41900, avg=735.33, stdev=4547.97 00:11:47.282 lat (usec): min=159, max=41919, avg=742.94, stdev=4549.22 00:11:47.282 clat percentiles (usec): 00:11:47.282 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 174], 00:11:47.282 | 30.00th=[ 182], 40.00th=[ 196], 50.00th=[ 235], 60.00th=[ 243], 00:11:47.282 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:11:47.282 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:47.282 | 99.99th=[41681] 00:11:47.282 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:11:47.282 slat (nsec): min=8176, max=40505, avg=10005.30, stdev=1548.34 00:11:47.282 clat (usec): min=99, max=383, avg=133.58, stdev=17.25 00:11:47.282 lat (usec): min=110, max=395, avg=143.58, stdev=17.83 00:11:47.282 clat percentiles (usec): 00:11:47.282 | 1.00th=[ 109], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 121], 00:11:47.282 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 137], 00:11:47.282 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 159], 00:11:47.282 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 334], 99.95th=[ 383], 00:11:47.282 | 99.99th=[ 383] 00:11:47.282 bw ( KiB/s): min=12288, max=12288, per=73.93%, avg=12288.00, stdev= 0.00, samples=1 00:11:47.282 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:47.282 lat (usec) : 100=0.04%, 250=89.53%, 500=9.88% 00:11:47.282 lat (msec) : 10=0.04%, 50=0.50% 00:11:47.282 cpu : usr=1.10%, sys=2.50%, ctx=2580, majf=0, minf=1 00:11:47.282 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:47.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.282 issued rwts: total=1044,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.282 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.282 job1: (groupid=0, jobs=1): err= 0: pid=245885: Thu Dec 5 20:31:40 2024 00:11:47.282 read: IOPS=198, BW=795KiB/s (814kB/s)(808KiB/1016msec) 00:11:47.282 slat (nsec): min=6833, max=24419, avg=9170.04, stdev=4488.19 00:11:47.282 clat (usec): min=171, max=41412, avg=4485.62, stdev=12459.96 00:11:47.282 lat (usec): min=186, max=41426, avg=4494.79, stdev=12464.08 00:11:47.282 clat percentiles (usec): 00:11:47.282 | 1.00th=[ 202], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 235], 00:11:47.282 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:11:47.282 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[40633], 95.00th=[41157], 00:11:47.282 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:47.282 | 99.99th=[41157] 00:11:47.282 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:11:47.282 slat (usec): min=10, max=15595, avg=42.18, stdev=688.72 00:11:47.282 clat (usec): min=122, max=807, avg=164.74, stdev=44.13 00:11:47.282 lat (usec): min=133, max=15822, avg=206.93, stdev=692.88 00:11:47.282 clat percentiles (usec): 00:11:47.282 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 149], 00:11:47.282 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:11:47.282 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 192], 00:11:47.282 | 99.00th=[ 247], 99.50th=[ 562], 99.90th=[ 807], 99.95th=[ 807], 00:11:47.282 | 99.99th=[ 807] 00:11:47.282 bw ( KiB/s): min= 4096, max= 4096, per=24.64%, avg=4096.00, stdev= 0.00, samples=1 00:11:47.282 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:47.282 lat (usec) : 250=86.13%, 500=10.08%, 750=0.70%, 1000=0.14% 00:11:47.282 lat (msec) : 50=2.94% 00:11:47.282 cpu : usr=0.99%, sys=0.59%, 
ctx=716, majf=0, minf=1 00:11:47.282 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.282 issued rwts: total=202,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.282 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.282 job2: (groupid=0, jobs=1): err= 0: pid=245886: Thu Dec 5 20:31:40 2024 00:11:47.282 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:47.282 slat (nsec): min=6501, max=41972, avg=8076.78, stdev=2046.11 00:11:47.282 clat (usec): min=165, max=41982, avg=457.81, stdev=3165.75 00:11:47.282 lat (usec): min=172, max=42005, avg=465.88, stdev=3166.79 00:11:47.282 clat percentiles (usec): 00:11:47.282 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 194], 00:11:47.282 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 217], 00:11:47.282 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 265], 00:11:47.282 | 99.00th=[ 359], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:11:47.282 | 99.99th=[42206] 00:11:47.282 write: IOPS=1660, BW=6641KiB/s (6801kB/s)(6648KiB/1001msec); 0 zone resets 00:11:47.282 slat (nsec): min=9525, max=44589, avg=10783.52, stdev=1596.14 00:11:47.282 clat (usec): min=114, max=1076, avg=156.09, stdev=38.00 00:11:47.282 lat (usec): min=124, max=1095, avg=166.87, stdev=38.23 00:11:47.282 clat percentiles (usec): 00:11:47.282 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 131], 00:11:47.282 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 151], 00:11:47.282 | 70.00th=[ 174], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 210], 00:11:47.282 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 306], 99.95th=[ 1074], 00:11:47.282 | 99.99th=[ 1074] 00:11:47.283 bw ( KiB/s): min= 4096, max= 4096, per=24.64%, avg=4096.00, stdev= 0.00, samples=1 00:11:47.283 iops : min= 1024, 
max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:47.283 lat (usec) : 250=94.31%, 500=5.28%, 750=0.09% 00:11:47.283 lat (msec) : 2=0.03%, 50=0.28% 00:11:47.283 cpu : usr=1.60%, sys=3.30%, ctx=3199, majf=0, minf=1 00:11:47.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.283 issued rwts: total=1536,1662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.283 job3: (groupid=0, jobs=1): err= 0: pid=245889: Thu Dec 5 20:31:40 2024 00:11:47.283 read: IOPS=386, BW=1547KiB/s (1584kB/s)(1564KiB/1011msec) 00:11:47.283 slat (nsec): min=6637, max=25030, avg=8304.50, stdev=3494.56 00:11:47.283 clat (usec): min=172, max=41948, avg=2311.68, stdev=9034.50 00:11:47.283 lat (usec): min=179, max=41971, avg=2319.99, stdev=9037.57 00:11:47.283 clat percentiles (usec): 00:11:47.283 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 202], 00:11:47.283 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:11:47.283 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[40633], 00:11:47.283 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:47.283 | 99.99th=[42206] 00:11:47.283 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:11:47.283 slat (nsec): min=9442, max=49761, avg=10634.54, stdev=2398.25 00:11:47.283 clat (usec): min=130, max=407, avg=186.40, stdev=24.21 00:11:47.283 lat (usec): min=140, max=457, avg=197.04, stdev=25.25 00:11:47.283 clat percentiles (usec): 00:11:47.283 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:11:47.283 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 188], 00:11:47.283 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 241], 00:11:47.283 | 99.00th=[ 251], 99.50th=[ 
269], 99.90th=[ 408], 99.95th=[ 408], 00:11:47.283 | 99.99th=[ 408] 00:11:47.283 bw ( KiB/s): min= 4096, max= 4096, per=24.64%, avg=4096.00, stdev= 0.00, samples=1 00:11:47.283 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:47.283 lat (usec) : 250=95.90%, 500=1.88% 00:11:47.283 lat (msec) : 50=2.21% 00:11:47.283 cpu : usr=0.30%, sys=0.99%, ctx=905, majf=0, minf=1 00:11:47.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.283 issued rwts: total=391,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.283 00:11:47.283 Run status group 0 (all jobs): 00:11:47.283 READ: bw=12.2MiB/s (12.8MB/s), 795KiB/s-6138KiB/s (814kB/s-6285kB/s), io=12.4MiB (13.0MB), run=1001-1016msec 00:11:47.283 WRITE: bw=16.2MiB/s (17.0MB/s), 2016KiB/s-6641KiB/s (2064kB/s-6801kB/s), io=16.5MiB (17.3MB), run=1001-1016msec 00:11:47.283 00:11:47.283 Disk stats (read/write): 00:11:47.283 nvme0n1: ios=1090/1536, merge=0/0, ticks=617/197, in_queue=814, util=86.87% 00:11:47.283 nvme0n2: ios=227/512, merge=0/0, ticks=1483/80, in_queue=1563, util=99.90% 00:11:47.283 nvme0n3: ios=1068/1476, merge=0/0, ticks=1507/230, in_queue=1737, util=98.54% 00:11:47.283 nvme0n4: ios=445/512, merge=0/0, ticks=915/94, in_queue=1009, util=98.32% 00:11:47.283 20:31:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:47.283 [global] 00:11:47.283 thread=1 00:11:47.283 invalidate=1 00:11:47.283 rw=write 00:11:47.283 time_based=1 00:11:47.283 runtime=1 00:11:47.283 ioengine=libaio 00:11:47.283 direct=1 00:11:47.283 bs=4096 00:11:47.283 iodepth=128 00:11:47.283 norandommap=0 00:11:47.283 numjobs=1 
00:11:47.283 00:11:47.283 verify_dump=1 00:11:47.283 verify_backlog=512 00:11:47.283 verify_state_save=0 00:11:47.283 do_verify=1 00:11:47.283 verify=crc32c-intel 00:11:47.283 [job0] 00:11:47.283 filename=/dev/nvme0n1 00:11:47.283 [job1] 00:11:47.283 filename=/dev/nvme0n2 00:11:47.283 [job2] 00:11:47.283 filename=/dev/nvme0n3 00:11:47.283 [job3] 00:11:47.283 filename=/dev/nvme0n4 00:11:47.283 Could not set queue depth (nvme0n1) 00:11:47.283 Could not set queue depth (nvme0n2) 00:11:47.283 Could not set queue depth (nvme0n3) 00:11:47.283 Could not set queue depth (nvme0n4) 00:11:47.542 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.542 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.542 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.542 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.542 fio-3.35 00:11:47.542 Starting 4 threads 00:11:48.918 00:11:48.918 job0: (groupid=0, jobs=1): err= 0: pid=246309: Thu Dec 5 20:31:42 2024 00:11:48.918 read: IOPS=6400, BW=25.0MiB/s (26.2MB/s)(25.1MiB/1002msec) 00:11:48.918 slat (nsec): min=1184, max=9264.6k, avg=79340.39, stdev=515162.83 00:11:48.918 clat (usec): min=1599, max=19080, avg=9868.00, stdev=2042.54 00:11:48.918 lat (usec): min=1606, max=19090, avg=9947.34, stdev=2076.68 00:11:48.918 clat percentiles (usec): 00:11:48.918 | 1.00th=[ 4146], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[ 8848], 00:11:48.918 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:11:48.918 | 70.00th=[10028], 80.00th=[10814], 90.00th=[12387], 95.00th=[14353], 00:11:48.918 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17695], 99.95th=[18220], 00:11:48.918 | 99.99th=[19006] 00:11:48.918 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 
00:11:48.918 slat (nsec): min=1930, max=9170.2k, avg=68315.57, stdev=377317.26 00:11:48.918 clat (usec): min=1963, max=23872, avg=9541.18, stdev=2076.03 00:11:48.918 lat (usec): min=1988, max=23875, avg=9609.49, stdev=2110.76 00:11:48.918 clat percentiles (usec): 00:11:48.918 | 1.00th=[ 3523], 5.00th=[ 5932], 10.00th=[ 7308], 20.00th=[ 8455], 00:11:48.918 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:11:48.918 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[12256], 95.00th=[13304], 00:11:48.918 | 99.00th=[15008], 99.50th=[16319], 99.90th=[19006], 99.95th=[19792], 00:11:48.918 | 99.99th=[23987] 00:11:48.918 bw ( KiB/s): min=24888, max=28360, per=35.36%, avg=26624.00, stdev=2455.07, samples=2 00:11:48.918 iops : min= 6222, max= 7090, avg=6656.00, stdev=613.77, samples=2 00:11:48.918 lat (msec) : 2=0.18%, 4=1.15%, 10=73.94%, 20=24.72%, 50=0.01% 00:11:48.918 cpu : usr=3.90%, sys=6.39%, ctx=754, majf=0, minf=1 00:11:48.918 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:48.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.918 issued rwts: total=6413,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.918 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.918 job1: (groupid=0, jobs=1): err= 0: pid=246310: Thu Dec 5 20:31:42 2024 00:11:48.918 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:11:48.918 slat (nsec): min=1313, max=14078k, avg=155163.86, stdev=1061090.70 00:11:48.918 clat (usec): min=3020, max=53999, avg=19802.83, stdev=13842.28 00:11:48.918 lat (usec): min=3887, max=54005, avg=19958.00, stdev=13927.06 00:11:48.918 clat percentiles (usec): 00:11:48.918 | 1.00th=[ 4359], 5.00th=[ 6849], 10.00th=[ 8979], 20.00th=[ 9503], 00:11:48.918 | 30.00th=[11207], 40.00th=[12387], 50.00th=[16057], 60.00th=[16581], 00:11:48.918 | 70.00th=[18744], 80.00th=[24511], 
90.00th=[47449], 95.00th=[51643], 00:11:48.918 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:11:48.918 | 99.99th=[53740] 00:11:48.918 write: IOPS=3494, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1004msec); 0 zone resets 00:11:48.918 slat (usec): min=2, max=9541, avg=136.78, stdev=773.76 00:11:48.918 clat (usec): min=313, max=59690, avg=18972.32, stdev=14943.48 00:11:48.918 lat (usec): min=1378, max=59700, avg=19109.10, stdev=15048.69 00:11:48.918 clat percentiles (usec): 00:11:48.918 | 1.00th=[ 2671], 5.00th=[ 4621], 10.00th=[ 6587], 20.00th=[ 8717], 00:11:48.918 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[12649], 00:11:48.918 | 70.00th=[26870], 80.00th=[33162], 90.00th=[45351], 95.00th=[49546], 00:11:48.918 | 99.00th=[56886], 99.50th=[58983], 99.90th=[59507], 99.95th=[59507], 00:11:48.918 | 99.99th=[59507] 00:11:48.918 bw ( KiB/s): min=12096, max=14944, per=17.95%, avg=13520.00, stdev=2013.84, samples=2 00:11:48.918 iops : min= 3024, max= 3736, avg=3380.00, stdev=503.46, samples=2 00:11:48.918 lat (usec) : 500=0.02% 00:11:48.918 lat (msec) : 2=0.44%, 4=1.05%, 10=38.25%, 20=29.57%, 50=25.06% 00:11:48.918 lat (msec) : 100=5.61% 00:11:48.918 cpu : usr=2.89%, sys=4.29%, ctx=311, majf=0, minf=2 00:11:48.918 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:48.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.918 issued rwts: total=3072,3508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.918 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.918 job2: (groupid=0, jobs=1): err= 0: pid=246311: Thu Dec 5 20:31:42 2024 00:11:48.918 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:11:48.918 slat (nsec): min=1089, max=9522.8k, avg=104122.45, stdev=665508.90 00:11:48.918 clat (usec): min=3879, max=30999, avg=13604.73, stdev=3761.71 00:11:48.918 lat (usec): min=3886, 
max=31899, avg=13708.85, stdev=3805.70 00:11:48.918 clat percentiles (usec): 00:11:48.918 | 1.00th=[ 6194], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10552], 00:11:48.918 | 30.00th=[10945], 40.00th=[11863], 50.00th=[13173], 60.00th=[13698], 00:11:48.918 | 70.00th=[15533], 80.00th=[16319], 90.00th=[18220], 95.00th=[20579], 00:11:48.918 | 99.00th=[25035], 99.50th=[28181], 99.90th=[29230], 99.95th=[31065], 00:11:48.918 | 99.99th=[31065] 00:11:48.918 write: IOPS=4651, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1006msec); 0 zone resets 00:11:48.918 slat (nsec): min=1736, max=33030k, avg=102799.58, stdev=793948.31 00:11:48.918 clat (usec): min=1157, max=35112, avg=12934.68, stdev=4531.83 00:11:48.918 lat (usec): min=1168, max=35116, avg=13037.48, stdev=4595.81 00:11:48.918 clat percentiles (usec): 00:11:48.918 | 1.00th=[ 3654], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[10290], 00:11:48.918 | 30.00th=[10552], 40.00th=[11338], 50.00th=[12649], 60.00th=[13042], 00:11:48.918 | 70.00th=[13435], 80.00th=[14091], 90.00th=[17957], 95.00th=[22414], 00:11:48.918 | 99.00th=[30802], 99.50th=[31327], 99.90th=[34866], 99.95th=[34866], 00:11:48.918 | 99.99th=[34866] 00:11:48.918 bw ( KiB/s): min=16272, max=20592, per=24.48%, avg=18432.00, stdev=3054.70, samples=2 00:11:48.918 iops : min= 4068, max= 5148, avg=4608.00, stdev=763.68, samples=2 00:11:48.918 lat (msec) : 2=0.03%, 4=0.70%, 10=10.40%, 20=81.72%, 50=7.15% 00:11:48.918 cpu : usr=3.48%, sys=5.27%, ctx=394, majf=0, minf=1 00:11:48.918 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:48.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.918 issued rwts: total=4608,4679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.918 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.918 job3: (groupid=0, jobs=1): err= 0: pid=246312: Thu Dec 5 20:31:42 2024 00:11:48.918 read: IOPS=3915, 
BW=15.3MiB/s (16.0MB/s)(15.4MiB/1005msec) 00:11:48.918 slat (nsec): min=1058, max=20417k, avg=133483.14, stdev=983201.76 00:11:48.918 clat (usec): min=3878, max=53666, avg=16435.64, stdev=10813.65 00:11:48.918 lat (usec): min=3884, max=54995, avg=16569.13, stdev=10894.52 00:11:48.918 clat percentiles (usec): 00:11:48.918 | 1.00th=[ 4621], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10552], 00:11:48.918 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11863], 60.00th=[14615], 00:11:48.918 | 70.00th=[15664], 80.00th=[17695], 90.00th=[30540], 95.00th=[46924], 00:11:48.918 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:11:48.918 | 99.99th=[53740] 00:11:48.919 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:11:48.919 slat (nsec): min=1985, max=17562k, avg=109805.81, stdev=812579.15 00:11:48.919 clat (usec): min=1152, max=50412, avg=15269.03, stdev=8317.59 00:11:48.919 lat (usec): min=1162, max=50445, avg=15378.84, stdev=8387.88 00:11:48.919 clat percentiles (usec): 00:11:48.919 | 1.00th=[ 3982], 5.00th=[ 5932], 10.00th=[ 7832], 20.00th=[ 9765], 00:11:48.919 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11076], 60.00th=[15008], 00:11:48.919 | 70.00th=[16581], 80.00th=[20055], 90.00th=[28181], 95.00th=[33424], 00:11:48.919 | 99.00th=[39584], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:11:48.919 | 99.99th=[50594] 00:11:48.919 bw ( KiB/s): min=16344, max=16424, per=21.76%, avg=16384.00, stdev=56.57, samples=2 00:11:48.919 iops : min= 4086, max= 4106, avg=4096.00, stdev=14.14, samples=2 00:11:48.919 lat (msec) : 2=0.02%, 4=0.66%, 10=16.60%, 20=65.14%, 50=15.65% 00:11:48.919 lat (msec) : 100=1.93% 00:11:48.919 cpu : usr=3.29%, sys=4.28%, ctx=351, majf=0, minf=1 00:11:48.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:48.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:11:48.919 issued rwts: total=3935,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.919 00:11:48.919 Run status group 0 (all jobs): 00:11:48.919 READ: bw=70.0MiB/s (73.4MB/s), 12.0MiB/s-25.0MiB/s (12.5MB/s-26.2MB/s), io=70.4MiB (73.8MB), run=1002-1006msec 00:11:48.919 WRITE: bw=73.5MiB/s (77.1MB/s), 13.6MiB/s-25.9MiB/s (14.3MB/s-27.2MB/s), io=74.0MiB (77.6MB), run=1002-1006msec 00:11:48.919 00:11:48.919 Disk stats (read/write): 00:11:48.919 nvme0n1: ios=5493/5632, merge=0/0, ticks=42185/39069, in_queue=81254, util=97.90% 00:11:48.919 nvme0n2: ios=2097/2532, merge=0/0, ticks=29071/49967, in_queue=79038, util=94.82% 00:11:48.919 nvme0n3: ios=3846/4096, merge=0/0, ticks=36901/35465, in_queue=72366, util=97.40% 00:11:48.919 nvme0n4: ios=3642/3981, merge=0/0, ticks=36248/34140, in_queue=70388, util=98.22% 00:11:48.919 20:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:48.919 [global] 00:11:48.919 thread=1 00:11:48.919 invalidate=1 00:11:48.919 rw=randwrite 00:11:48.919 time_based=1 00:11:48.919 runtime=1 00:11:48.919 ioengine=libaio 00:11:48.919 direct=1 00:11:48.919 bs=4096 00:11:48.919 iodepth=128 00:11:48.919 norandommap=0 00:11:48.919 numjobs=1 00:11:48.919 00:11:48.919 verify_dump=1 00:11:48.919 verify_backlog=512 00:11:48.919 verify_state_save=0 00:11:48.919 do_verify=1 00:11:48.919 verify=crc32c-intel 00:11:48.919 [job0] 00:11:48.919 filename=/dev/nvme0n1 00:11:48.919 [job1] 00:11:48.919 filename=/dev/nvme0n2 00:11:48.919 [job2] 00:11:48.919 filename=/dev/nvme0n3 00:11:48.919 [job3] 00:11:48.919 filename=/dev/nvme0n4 00:11:48.919 Could not set queue depth (nvme0n1) 00:11:48.919 Could not set queue depth (nvme0n2) 00:11:48.919 Could not set queue depth (nvme0n3) 00:11:48.919 Could not set queue depth (nvme0n4) 00:11:49.177 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.177 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.177 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.177 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.177 fio-3.35 00:11:49.177 Starting 4 threads 00:11:50.581 00:11:50.581 job0: (groupid=0, jobs=1): err= 0: pid=246731: Thu Dec 5 20:31:43 2024 00:11:50.581 read: IOPS=4188, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1008msec) 00:11:50.581 slat (nsec): min=961, max=22606k, avg=113479.87, stdev=854365.89 00:11:50.581 clat (usec): min=1554, max=53634, avg=14752.87, stdev=6749.89 00:11:50.581 lat (usec): min=2476, max=53650, avg=14866.35, stdev=6811.03 00:11:50.581 clat percentiles (usec): 00:11:50.581 | 1.00th=[ 5800], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10683], 00:11:50.581 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[12649], 00:11:50.581 | 70.00th=[16450], 80.00th=[20841], 90.00th=[24511], 95.00th=[28443], 00:11:50.581 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[40109], 00:11:50.581 | 99.99th=[53740] 00:11:50.581 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:11:50.581 slat (nsec): min=1752, max=14357k, avg=106211.17, stdev=752421.25 00:11:50.581 clat (usec): min=478, max=51680, avg=14213.42, stdev=7352.67 00:11:50.581 lat (usec): min=516, max=51690, avg=14319.63, stdev=7403.87 00:11:50.581 clat percentiles (usec): 00:11:50.581 | 1.00th=[ 4948], 5.00th=[ 5997], 10.00th=[ 8029], 20.00th=[ 9372], 00:11:50.581 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11469], 60.00th=[14484], 00:11:50.581 | 70.00th=[17171], 80.00th=[18482], 90.00th=[20317], 95.00th=[25822], 00:11:50.581 | 99.00th=[50070], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:11:50.581 | 
99.99th=[51643] 00:11:50.581 bw ( KiB/s): min=16384, max=20464, per=23.50%, avg=18424.00, stdev=2885.00, samples=2 00:11:50.581 iops : min= 4096, max= 5116, avg=4606.00, stdev=721.25, samples=2 00:11:50.581 lat (usec) : 500=0.01%, 750=0.01% 00:11:50.581 lat (msec) : 2=0.03%, 4=0.57%, 10=18.81%, 20=64.35%, 50=15.67% 00:11:50.581 lat (msec) : 100=0.54% 00:11:50.581 cpu : usr=2.28%, sys=4.57%, ctx=321, majf=0, minf=1 00:11:50.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:50.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.581 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.581 job1: (groupid=0, jobs=1): err= 0: pid=246732: Thu Dec 5 20:31:43 2024 00:11:50.581 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:11:50.581 slat (nsec): min=1275, max=11058k, avg=105296.77, stdev=768910.36 00:11:50.581 clat (usec): min=3610, max=55768, avg=12389.30, stdev=4343.77 00:11:50.581 lat (usec): min=3616, max=55774, avg=12494.60, stdev=4425.43 00:11:50.581 clat percentiles (usec): 00:11:50.581 | 1.00th=[ 5276], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[ 9765], 00:11:50.581 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11338], 60.00th=[11600], 00:11:50.581 | 70.00th=[12780], 80.00th=[14746], 90.00th=[17695], 95.00th=[19268], 00:11:50.581 | 99.00th=[26084], 99.50th=[39060], 99.90th=[55837], 99.95th=[55837], 00:11:50.581 | 99.99th=[55837] 00:11:50.581 write: IOPS=5038, BW=19.7MiB/s (20.6MB/s)(19.9MiB/1011msec); 0 zone resets 00:11:50.581 slat (usec): min=2, max=16583, avg=96.81, stdev=644.60 00:11:50.581 clat (usec): min=2264, max=85618, avg=13948.87, stdev=11883.06 00:11:50.581 lat (usec): min=2274, max=86099, avg=14045.68, stdev=11946.61 00:11:50.581 clat percentiles (usec): 00:11:50.581 | 1.00th=[ 3359], 5.00th=[ 5538], 
10.00th=[ 7767], 20.00th=[ 9372], 00:11:50.581 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[11207], 00:11:50.581 | 70.00th=[11731], 80.00th=[16712], 90.00th=[19530], 95.00th=[39584], 00:11:50.581 | 99.00th=[73925], 99.50th=[83362], 99.90th=[85459], 99.95th=[85459], 00:11:50.581 | 99.99th=[85459] 00:11:50.581 bw ( KiB/s): min=15152, max=24576, per=25.34%, avg=19864.00, stdev=6663.77, samples=2 00:11:50.581 iops : min= 3788, max= 6144, avg=4966.00, stdev=1665.94, samples=2 00:11:50.581 lat (msec) : 4=1.37%, 10=34.39%, 20=58.80%, 50=3.24%, 100=2.20% 00:11:50.581 cpu : usr=3.76%, sys=4.55%, ctx=577, majf=0, minf=1 00:11:50.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:50.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.582 issued rwts: total=4608,5094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.582 job2: (groupid=0, jobs=1): err= 0: pid=246733: Thu Dec 5 20:31:43 2024 00:11:50.582 read: IOPS=4937, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1001msec) 00:11:50.582 slat (nsec): min=1116, max=12127k, avg=102851.68, stdev=714884.60 00:11:50.582 clat (usec): min=470, max=29934, avg=13156.14, stdev=3931.90 00:11:50.582 lat (usec): min=3805, max=29947, avg=13258.99, stdev=3976.44 00:11:50.582 clat percentiles (usec): 00:11:50.582 | 1.00th=[ 4178], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10683], 00:11:50.582 | 30.00th=[11076], 40.00th=[11207], 50.00th=[12518], 60.00th=[12911], 00:11:50.582 | 70.00th=[13435], 80.00th=[15795], 90.00th=[18744], 95.00th=[21365], 00:11:50.582 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26346], 99.95th=[27132], 00:11:50.582 | 99.99th=[30016] 00:11:50.582 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:11:50.582 slat (nsec): min=1975, max=9055.6k, avg=82566.55, stdev=437997.28 
00:11:50.582 clat (usec): min=2125, max=47543, avg=12089.59, stdev=4886.64 00:11:50.582 lat (usec): min=2134, max=47546, avg=12172.16, stdev=4915.42 00:11:50.582 clat percentiles (usec): 00:11:50.582 | 1.00th=[ 4080], 5.00th=[ 6456], 10.00th=[ 7570], 20.00th=[ 9765], 00:11:50.582 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:11:50.582 | 70.00th=[12125], 80.00th=[13173], 90.00th=[16909], 95.00th=[22676], 00:11:50.582 | 99.00th=[30016], 99.50th=[34341], 99.90th=[47449], 99.95th=[47449], 00:11:50.582 | 99.99th=[47449] 00:11:50.582 bw ( KiB/s): min=20480, max=20480, per=26.12%, avg=20480.00, stdev= 0.00, samples=1 00:11:50.582 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:50.582 lat (usec) : 500=0.01% 00:11:50.582 lat (msec) : 4=0.66%, 10=16.48%, 20=76.18%, 50=6.68% 00:11:50.582 cpu : usr=3.00%, sys=5.90%, ctx=536, majf=0, minf=2 00:11:50.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:50.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.582 issued rwts: total=4942,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.582 job3: (groupid=0, jobs=1): err= 0: pid=246734: Thu Dec 5 20:31:43 2024 00:11:50.582 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:11:50.582 slat (nsec): min=1438, max=6553.1k, avg=105817.33, stdev=495345.51 00:11:50.582 clat (usec): min=8258, max=25983, avg=13609.07, stdev=3576.99 00:11:50.582 lat (usec): min=8386, max=25994, avg=13714.89, stdev=3581.53 00:11:50.582 clat percentiles (usec): 00:11:50.582 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10683], 20.00th=[11207], 00:11:50.582 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12518], 60.00th=[13042], 00:11:50.582 | 70.00th=[13435], 80.00th=[14877], 90.00th=[20841], 95.00th=[21890], 00:11:50.582 | 99.00th=[23725], 
99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:11:50.582 | 99.99th=[26084] 00:11:50.582 write: IOPS=4977, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1003msec); 0 zone resets 00:11:50.582 slat (usec): min=2, max=5189, avg=97.78, stdev=508.70 00:11:50.582 clat (usec): min=312, max=21705, avg=12813.78, stdev=3142.11 00:11:50.582 lat (usec): min=2473, max=23484, avg=12911.57, stdev=3129.28 00:11:50.582 clat percentiles (usec): 00:11:50.582 | 1.00th=[ 5604], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[11076], 00:11:50.582 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12518], 00:11:50.582 | 70.00th=[13304], 80.00th=[14091], 90.00th=[18482], 95.00th=[20055], 00:11:50.582 | 99.00th=[20579], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:11:50.582 | 99.99th=[21627] 00:11:50.582 bw ( KiB/s): min=17240, max=21672, per=24.82%, avg=19456.00, stdev=3133.90, samples=2 00:11:50.582 iops : min= 4310, max= 5418, avg=4864.00, stdev=783.47, samples=2 00:11:50.582 lat (usec) : 500=0.01% 00:11:50.582 lat (msec) : 4=0.33%, 10=8.19%, 20=82.26%, 50=9.21% 00:11:50.582 cpu : usr=3.29%, sys=5.49%, ctx=498, majf=0, minf=1 00:11:50.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:50.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.582 issued rwts: total=4608,4992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.582 00:11:50.582 Run status group 0 (all jobs): 00:11:50.582 READ: bw=71.0MiB/s (74.5MB/s), 16.4MiB/s-19.3MiB/s (17.2MB/s-20.2MB/s), io=71.8MiB (75.3MB), run=1001-1011msec 00:11:50.582 WRITE: bw=76.6MiB/s (80.3MB/s), 17.9MiB/s-20.0MiB/s (18.7MB/s-20.9MB/s), io=77.4MiB (81.2MB), run=1001-1011msec 00:11:50.582 00:11:50.582 Disk stats (read/write): 00:11:50.582 nvme0n1: ios=3666/4096, merge=0/0, ticks=34687/32379, in_queue=67066, util=90.08% 00:11:50.582 
nvme0n2: ios=4117/4487, merge=0/0, ticks=49077/55605, in_queue=104682, util=94.11% 00:11:50.582 nvme0n3: ios=4153/4246, merge=0/0, ticks=50343/46733, in_queue=97076, util=94.59% 00:11:50.582 nvme0n4: ios=3854/4096, merge=0/0, ticks=14146/12818, in_queue=26964, util=99.79% 00:11:50.582 20:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:50.582 20:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=246996 00:11:50.582 20:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:50.582 20:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:50.582 [global] 00:11:50.582 thread=1 00:11:50.582 invalidate=1 00:11:50.582 rw=read 00:11:50.582 time_based=1 00:11:50.582 runtime=10 00:11:50.582 ioengine=libaio 00:11:50.582 direct=1 00:11:50.582 bs=4096 00:11:50.582 iodepth=1 00:11:50.582 norandommap=1 00:11:50.582 numjobs=1 00:11:50.582 00:11:50.582 [job0] 00:11:50.582 filename=/dev/nvme0n1 00:11:50.582 [job1] 00:11:50.582 filename=/dev/nvme0n2 00:11:50.582 [job2] 00:11:50.582 filename=/dev/nvme0n3 00:11:50.582 [job3] 00:11:50.582 filename=/dev/nvme0n4 00:11:50.582 Could not set queue depth (nvme0n1) 00:11:50.582 Could not set queue depth (nvme0n2) 00:11:50.582 Could not set queue depth (nvme0n3) 00:11:50.582 Could not set queue depth (nvme0n4) 00:11:50.842 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.842 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.842 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.842 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.842 fio-3.35 00:11:50.842 Starting 4 threads 
00:11:53.369 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:53.627 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=47624192, buflen=4096 00:11:53.627 fio: pid=247157, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:53.627 20:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:53.627 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:53.627 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:53.885 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4349952, buflen=4096 00:11:53.885 fio: pid=247156, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:53.885 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=52834304, buflen=4096 00:11:53.885 fio: pid=247154, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:53.885 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:53.885 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:54.142 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54558720, buflen=4096 00:11:54.142 fio: pid=247155, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:54.142 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.143 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:54.143 00:11:54.143 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=247154: Thu Dec 5 20:31:47 2024 00:11:54.143 read: IOPS=4228, BW=16.5MiB/s (17.3MB/s)(50.4MiB/3051msec) 00:11:54.143 slat (usec): min=4, max=11667, avg= 9.48, stdev=118.89 00:11:54.143 clat (usec): min=170, max=29993, avg=222.79, stdev=263.18 00:11:54.143 lat (usec): min=176, max=30001, avg=232.27, stdev=289.26 00:11:54.143 clat percentiles (usec): 00:11:54.143 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:11:54.143 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 223], 00:11:54.143 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 258], 00:11:54.143 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 498], 99.95th=[ 523], 00:11:54.143 | 99.99th=[ 562] 00:11:54.143 bw ( KiB/s): min=15864, max=17784, per=35.90%, avg=17238.40, stdev=796.66, samples=5 00:11:54.143 iops : min= 3966, max= 4446, avg=4309.60, stdev=199.17, samples=5 00:11:54.143 lat (usec) : 250=92.48%, 500=7.43%, 750=0.08% 00:11:54.143 lat (msec) : 50=0.01% 00:11:54.143 cpu : usr=2.49%, sys=6.56%, ctx=12903, majf=0, minf=1 00:11:54.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.143 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.143 issued rwts: total=12900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.143 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=247155: Thu Dec 5 20:31:47 2024 00:11:54.143 read: IOPS=4110, BW=16.1MiB/s 
(16.8MB/s)(52.0MiB/3241msec) 00:11:54.143 slat (usec): min=6, max=28548, avg=13.54, stdev=317.79 00:11:54.143 clat (usec): min=159, max=41212, avg=225.93, stdev=610.33 00:11:54.143 lat (usec): min=168, max=41223, avg=239.47, stdev=689.51 00:11:54.143 clat percentiles (usec): 00:11:54.143 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 204], 00:11:54.143 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:11:54.143 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 243], 00:11:54.143 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 367], 99.95th=[ 506], 00:11:54.143 | 99.99th=[40633] 00:11:54.143 bw ( KiB/s): min=14496, max=17656, per=35.26%, avg=16933.00, stdev=1224.22, samples=6 00:11:54.143 iops : min= 3624, max= 4414, avg=4233.17, stdev=306.06, samples=6 00:11:54.143 lat (usec) : 250=97.85%, 500=2.09%, 750=0.03% 00:11:54.143 lat (msec) : 50=0.02% 00:11:54.143 cpu : usr=2.04%, sys=6.94%, ctx=13326, majf=0, minf=2 00:11:54.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.143 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.143 issued rwts: total=13321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.143 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=247156: Thu Dec 5 20:31:47 2024 00:11:54.143 read: IOPS=370, BW=1481KiB/s (1516kB/s)(4248KiB/2869msec) 00:11:54.143 slat (usec): min=6, max=123, avg= 9.43, stdev= 5.28 00:11:54.143 clat (usec): min=175, max=42065, avg=2669.99, stdev=9660.52 00:11:54.143 lat (usec): min=183, max=42079, avg=2679.41, stdev=9661.89 00:11:54.143 clat percentiles (usec): 00:11:54.143 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:11:54.143 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 243], 60.00th=[ 258], 
00:11:54.143 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 314], 95.00th=[41157], 00:11:54.143 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:54.143 | 99.99th=[42206] 00:11:54.143 bw ( KiB/s): min= 96, max= 8024, per=3.51%, avg=1684.80, stdev=3543.72, samples=5 00:11:54.143 iops : min= 24, max= 2006, avg=421.20, stdev=885.93, samples=5 00:11:54.143 lat (usec) : 250=54.66%, 500=39.23%, 750=0.09% 00:11:54.143 lat (msec) : 50=5.93% 00:11:54.143 cpu : usr=0.21%, sys=0.42%, ctx=1065, majf=0, minf=2 00:11:54.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.143 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.143 issued rwts: total=1063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.143 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=247157: Thu Dec 5 20:31:47 2024 00:11:54.143 read: IOPS=4353, BW=17.0MiB/s (17.8MB/s)(45.4MiB/2671msec) 00:11:54.143 slat (nsec): min=6316, max=36240, avg=7431.60, stdev=931.81 00:11:54.143 clat (usec): min=180, max=40866, avg=219.46, stdev=377.26 00:11:54.143 lat (usec): min=188, max=40873, avg=226.89, stdev=377.26 00:11:54.143 clat percentiles (usec): 00:11:54.143 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 204], 00:11:54.143 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 219], 00:11:54.143 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 235], 95.00th=[ 241], 00:11:54.143 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 273], 99.95th=[ 281], 00:11:54.143 | 99.99th=[ 412] 00:11:54.143 bw ( KiB/s): min=16208, max=17952, per=36.57%, avg=17563.20, stdev=758.83, samples=5 00:11:54.143 iops : min= 4052, max= 4488, avg=4390.80, stdev=189.71, samples=5 00:11:54.143 lat (usec) : 250=98.49%, 500=1.50% 00:11:54.143 lat (msec) : 
50=0.01% 00:11:54.143 cpu : usr=1.01%, sys=4.01%, ctx=11629, majf=0, minf=2 00:11:54.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.143 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.143 issued rwts: total=11628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.143 00:11:54.143 Run status group 0 (all jobs): 00:11:54.143 READ: bw=46.9MiB/s (49.2MB/s), 1481KiB/s-17.0MiB/s (1516kB/s-17.8MB/s), io=152MiB (159MB), run=2671-3241msec 00:11:54.143 00:11:54.143 Disk stats (read/write): 00:11:54.143 nvme0n1: ios=12317/0, merge=0/0, ticks=2583/0, in_queue=2583, util=95.43% 00:11:54.143 nvme0n2: ios=13135/0, merge=0/0, ticks=2787/0, in_queue=2787, util=94.96% 00:11:54.143 nvme0n3: ios=1105/0, merge=0/0, ticks=3665/0, in_queue=3665, util=99.09% 00:11:54.143 nvme0n4: ios=11435/0, merge=0/0, ticks=2444/0, in_queue=2444, util=96.49% 00:11:54.401 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.401 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:54.658 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.658 20:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:54.658 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.658 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:54.915 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.915 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:55.172 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 246996 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:55.173 nvmf hotplug test: fio failed as expected 00:11:55.173 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:55.431 rmmod nvme_tcp 00:11:55.431 rmmod nvme_fabrics 00:11:55.431 rmmod nvme_keyring 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:55.431 20:31:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 243682 ']' 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 243682 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 243682 ']' 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 243682 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.431 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 243682 00:11:55.690 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.690 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.690 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 243682' 00:11:55.691 killing process with pid 243682 00:11:55.691 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 243682 00:11:55.691 20:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 243682 00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.691 20:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.227 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:58.227 00:11:58.227 real 0m27.355s 00:11:58.227 user 2m2.640s 00:11:58.227 sys 0m9.149s 00:11:58.227 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.227 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.227 ************************************ 00:11:58.227 END TEST nvmf_fio_target 00:11:58.227 ************************************ 00:11:58.227 20:31:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:58.227 20:31:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:58.227 20:31:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.227 20:31:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:58.227 ************************************ 00:11:58.227 START TEST 
nvmf_bdevio 00:11:58.228 ************************************ 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:58.228 * Looking for test storage... 00:11:58.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 
v 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.228 20:31:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:58.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.228 --rc genhtml_branch_coverage=1 00:11:58.228 --rc genhtml_function_coverage=1 00:11:58.228 --rc genhtml_legend=1 00:11:58.228 --rc geninfo_all_blocks=1 00:11:58.228 --rc geninfo_unexecuted_blocks=1 00:11:58.228 00:11:58.228 ' 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:58.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.228 --rc genhtml_branch_coverage=1 00:11:58.228 --rc genhtml_function_coverage=1 00:11:58.228 --rc genhtml_legend=1 00:11:58.228 --rc geninfo_all_blocks=1 00:11:58.228 --rc geninfo_unexecuted_blocks=1 00:11:58.228 00:11:58.228 ' 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:58.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.228 --rc genhtml_branch_coverage=1 00:11:58.228 --rc genhtml_function_coverage=1 00:11:58.228 --rc genhtml_legend=1 00:11:58.228 --rc geninfo_all_blocks=1 00:11:58.228 --rc geninfo_unexecuted_blocks=1 00:11:58.228 00:11:58.228 ' 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:58.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.228 --rc genhtml_branch_coverage=1 00:11:58.228 --rc genhtml_function_coverage=1 00:11:58.228 --rc genhtml_legend=1 00:11:58.228 --rc geninfo_all_blocks=1 00:11:58.228 --rc geninfo_unexecuted_blocks=1 00:11:58.228 00:11:58.228 ' 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.228 20:31:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.228 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:58.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:58.229 20:31:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.803 20:31:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:04.803 20:31:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:04.803 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:04.803 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:04.803 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:04.804 
20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:04.804 Found net devices under 0000:af:00.0: cvl_0_0 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:04.804 Found net devices under 0000:af:00.1: cvl_0_1 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:04.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:04.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:12:04.804 00:12:04.804 --- 10.0.0.2 ping statistics --- 00:12:04.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.804 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:12:04.804 00:12:04.804 --- 10.0.0.1 ping statistics --- 00:12:04.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.804 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.804 20:31:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=251703 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 251703 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 251703 ']' 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.804 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.804 [2024-12-05 20:31:57.454129] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:12:04.804 [2024-12-05 20:31:57.454168] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.804 [2024-12-05 20:31:57.529155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.804 [2024-12-05 20:31:57.567984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.804 [2024-12-05 20:31:57.568017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.804 [2024-12-05 20:31:57.568024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.804 [2024-12-05 20:31:57.568030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.804 [2024-12-05 20:31:57.568034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:04.804 [2024-12-05 20:31:57.569515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:04.804 [2024-12-05 20:31:57.569626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:04.804 [2024-12-05 20:31:57.569737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.804 [2024-12-05 20:31:57.569738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.805 [2024-12-05 20:31:57.711646] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.805 20:31:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.805 Malloc0 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.805 [2024-12-05 20:31:57.772344] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:04.805 { 00:12:04.805 "params": { 00:12:04.805 "name": "Nvme$subsystem", 00:12:04.805 "trtype": "$TEST_TRANSPORT", 00:12:04.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:04.805 "adrfam": "ipv4", 00:12:04.805 "trsvcid": "$NVMF_PORT", 00:12:04.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:04.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:04.805 "hdgst": ${hdgst:-false}, 00:12:04.805 "ddgst": ${ddgst:-false} 00:12:04.805 }, 00:12:04.805 "method": "bdev_nvme_attach_controller" 00:12:04.805 } 00:12:04.805 EOF 00:12:04.805 )") 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:04.805 20:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:04.805 "params": { 00:12:04.805 "name": "Nvme1", 00:12:04.805 "trtype": "tcp", 00:12:04.805 "traddr": "10.0.0.2", 00:12:04.805 "adrfam": "ipv4", 00:12:04.805 "trsvcid": "4420", 00:12:04.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:04.805 "hdgst": false, 00:12:04.805 "ddgst": false 00:12:04.805 }, 00:12:04.805 "method": "bdev_nvme_attach_controller" 00:12:04.805 }' 00:12:04.805 [2024-12-05 20:31:57.820786] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:12:04.805 [2024-12-05 20:31:57.820825] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251856 ] 00:12:04.805 [2024-12-05 20:31:57.893767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:04.805 [2024-12-05 20:31:57.934435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.805 [2024-12-05 20:31:57.934542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.805 [2024-12-05 20:31:57.934541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.805 I/O targets: 00:12:04.805 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:04.805 00:12:04.805 00:12:04.805 CUnit - A unit testing framework for C - Version 2.1-3 00:12:04.805 http://cunit.sourceforge.net/ 00:12:04.805 00:12:04.805 00:12:04.805 Suite: bdevio tests on: Nvme1n1 00:12:04.805 Test: blockdev write read block ...passed 00:12:04.805 Test: blockdev write zeroes read block ...passed 00:12:04.805 Test: blockdev write zeroes read no split ...passed 00:12:05.064 Test: blockdev write zeroes read split 
...passed 00:12:05.064 Test: blockdev write zeroes read split partial ...passed 00:12:05.064 Test: blockdev reset ...[2024-12-05 20:31:58.288989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:05.064 [2024-12-05 20:31:58.289048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c6400 (9): Bad file descriptor 00:12:05.064 [2024-12-05 20:31:58.340181] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:05.064 passed 00:12:05.064 Test: blockdev write read 8 blocks ...passed 00:12:05.064 Test: blockdev write read size > 128k ...passed 00:12:05.064 Test: blockdev write read invalid size ...passed 00:12:05.064 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:05.064 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:05.064 Test: blockdev write read max offset ...passed 00:12:05.064 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:05.324 Test: blockdev writev readv 8 blocks ...passed 00:12:05.324 Test: blockdev writev readv 30 x 1block ...passed 00:12:05.324 Test: blockdev writev readv block ...passed 00:12:05.324 Test: blockdev writev readv size > 128k ...passed 00:12:05.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:05.324 Test: blockdev comparev and writev ...[2024-12-05 20:31:58.552781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.324 [2024-12-05 20:31:58.552808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:05.324 [2024-12-05 20:31:58.552820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.324 [2024-12-05 
20:31:58.552828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:05.324 [2024-12-05 20:31:58.553049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.324 [2024-12-05 20:31:58.553063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:05.324 [2024-12-05 20:31:58.553074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.324 [2024-12-05 20:31:58.553081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:05.324 [2024-12-05 20:31:58.553289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.324 [2024-12-05 20:31:58.553298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:05.324 [2024-12-05 20:31:58.553308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.324 [2024-12-05 20:31:58.553314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:05.324 [2024-12-05 20:31:58.553538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.324 [2024-12-05 20:31:58.553546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:05.324 [2024-12-05 20:31:58.553556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:05.324 [2024-12-05 20:31:58.553562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:05.324 passed 00:12:05.324 Test: blockdev nvme passthru rw ...passed 00:12:05.324 Test: blockdev nvme passthru vendor specific ...[2024-12-05 20:31:58.636401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:05.324 [2024-12-05 20:31:58.636416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:05.324 [2024-12-05 20:31:58.636514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:05.324 [2024-12-05 20:31:58.636522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:05.324 [2024-12-05 20:31:58.636618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:05.324 [2024-12-05 20:31:58.636626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:05.324 [2024-12-05 20:31:58.636718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:05.324 [2024-12-05 20:31:58.636726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:05.324 passed 00:12:05.324 Test: blockdev nvme admin passthru ...passed 00:12:05.324 Test: blockdev copy ...passed 00:12:05.324 00:12:05.324 Run Summary: Type Total Ran Passed Failed Inactive 00:12:05.324 suites 1 1 n/a 0 0 00:12:05.324 tests 23 23 23 0 0 00:12:05.324 asserts 152 152 152 0 n/a 00:12:05.324 00:12:05.324 Elapsed time = 1.130 seconds 
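The `IFS=,` / `printf '%s\n'` pair at the top of this chunk is the bdevio wrapper emitting the JSON-RPC payload that attaches the controller under test. For reference, the same payload assembled as a here-doc (values copied from the log above; the helper name `gen_nvmf_config` is ours, not part of the test scripts) looks like:

```shell
# Emit the attach-controller JSON-RPC payload printed in the log above.
# Helper name is illustrative; the real script builds this inline via printf.
gen_nvmf_config() {
    cat <<'JSON'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON
}

gen_nvmf_config
```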
00:12:05.583 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.583 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.583 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:05.583 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.583 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:05.583 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:05.583 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.584 rmmod nvme_tcp 00:12:05.584 rmmod nvme_fabrics 00:12:05.584 rmmod nvme_keyring 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 251703 ']' 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 251703 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 251703 ']' 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 251703 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251703 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251703' 00:12:05.584 killing process with pid 251703 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 251703 00:12:05.584 20:31:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 251703 00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
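The `killprocess 251703` trace above checks the pid's command name before signalling it. A self-contained sketch of that pattern (function name `killprocess_sketch` is ours; the real helper in autotest_common.sh also handles FreeBSD and sudo re-exec cases) might look like:

```shell
# Sketch of the killprocess pattern traced above: verify the pid's command
# name, refuse to kill a bare sudo, then signal and reap the process.
killprocess_sketch() {
    local pid=$1 name
    name=$(ps --no-headers -o comm= "$pid") || return 1  # pid already gone
    [ "$name" = sudo ] && return 1                       # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null  # reap; status reflects the signal, which is fine
    return 0
}
```

Note `wait` only works here because the test scripts launch the target as a child of the same shell, as this run does.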
00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.843 20:31:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.383 00:12:08.383 real 0m9.988s 00:12:08.383 user 0m10.049s 00:12:08.383 sys 0m4.918s 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.383 ************************************ 00:12:08.383 END TEST nvmf_bdevio 00:12:08.383 ************************************ 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:08.383 00:12:08.383 real 4m38.513s 00:12:08.383 user 10m53.363s 00:12:08.383 sys 1m38.944s 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:08.383 ************************************ 00:12:08.383 END TEST nvmf_target_core 00:12:08.383 ************************************ 00:12:08.383 20:32:01 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:08.383 20:32:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.383 20:32:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.383 20:32:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
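The `lt 1.15 2` call traced below is scripts/common.sh deciding whether the installed lcov predates 2.0: both versions are split on `IFS=.-:` and compared component-wise, padding the shorter with zeros. A self-contained sketch of that compare (function name `ver_lt` is ours; the real `cmp_versions` also supports `>` and `=` operators):

```shell
# Component-wise "less than" version compare, mirroring the cmp_versions
# trace below: split on . - :, compare numerically, missing parts count as 0.
ver_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2"
```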
00:12:08.383 ************************************ 00:12:08.383 START TEST nvmf_target_extra 00:12:08.383 ************************************ 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:08.383 * Looking for test storage... 00:12:08.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.383 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:08.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.384 --rc genhtml_branch_coverage=1 00:12:08.384 --rc genhtml_function_coverage=1 00:12:08.384 --rc genhtml_legend=1 00:12:08.384 --rc geninfo_all_blocks=1 
00:12:08.384 --rc geninfo_unexecuted_blocks=1 00:12:08.384 00:12:08.384 ' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:08.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.384 --rc genhtml_branch_coverage=1 00:12:08.384 --rc genhtml_function_coverage=1 00:12:08.384 --rc genhtml_legend=1 00:12:08.384 --rc geninfo_all_blocks=1 00:12:08.384 --rc geninfo_unexecuted_blocks=1 00:12:08.384 00:12:08.384 ' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.384 --rc genhtml_branch_coverage=1 00:12:08.384 --rc genhtml_function_coverage=1 00:12:08.384 --rc genhtml_legend=1 00:12:08.384 --rc geninfo_all_blocks=1 00:12:08.384 --rc geninfo_unexecuted_blocks=1 00:12:08.384 00:12:08.384 ' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.384 --rc genhtml_branch_coverage=1 00:12:08.384 --rc genhtml_function_coverage=1 00:12:08.384 --rc genhtml_legend=1 00:12:08.384 --rc geninfo_all_blocks=1 00:12:08.384 --rc geninfo_unexecuted_blocks=1 00:12:08.384 00:12:08.384 ' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.384 ************************************ 00:12:08.384 START TEST nvmf_example 00:12:08.384 ************************************ 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:08.384 * Looking for test storage... 00:12:08.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.384 
20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:08.384 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:08.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.385 --rc genhtml_branch_coverage=1 00:12:08.385 --rc genhtml_function_coverage=1 00:12:08.385 --rc genhtml_legend=1 00:12:08.385 --rc geninfo_all_blocks=1 00:12:08.385 --rc geninfo_unexecuted_blocks=1 00:12:08.385 00:12:08.385 ' 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:08.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.385 --rc genhtml_branch_coverage=1 00:12:08.385 --rc genhtml_function_coverage=1 00:12:08.385 --rc genhtml_legend=1 00:12:08.385 --rc geninfo_all_blocks=1 00:12:08.385 --rc geninfo_unexecuted_blocks=1 00:12:08.385 00:12:08.385 ' 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.385 --rc genhtml_branch_coverage=1 00:12:08.385 --rc genhtml_function_coverage=1 00:12:08.385 --rc genhtml_legend=1 00:12:08.385 --rc geninfo_all_blocks=1 00:12:08.385 --rc geninfo_unexecuted_blocks=1 00:12:08.385 00:12:08.385 ' 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.385 --rc 
genhtml_branch_coverage=1 00:12:08.385 --rc genhtml_function_coverage=1 00:12:08.385 --rc genhtml_legend=1 00:12:08.385 --rc geninfo_all_blocks=1 00:12:08.385 --rc geninfo_unexecuted_blocks=1 00:12:08.385 00:12:08.385 ' 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:08.385 20:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.385 
20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.385 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.960 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:14.961 20:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:14.961 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:14.961 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:14.961 Found net devices under 0000:af:00.0: cvl_0_0 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.961 20:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:14.961 Found net devices under 0000:af:00.1: cvl_0_1 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.961 
20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:14.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:12:14.961 00:12:14.961 --- 10.0.0.2 ping statistics --- 00:12:14.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.961 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:12:14.961 00:12:14.961 --- 10.0.0.1 ping statistics --- 00:12:14.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.961 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:12:14.961 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:14.962 20:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=255782 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 255782 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 255782 ']' 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:14.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.962 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.220 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:15.477 20:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:15.477 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:27.689 Initializing NVMe Controllers 00:12:27.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:27.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:27.689 Initialization complete. Launching workers. 00:12:27.689 ======================================================== 00:12:27.689 Latency(us) 00:12:27.689 Device Information : IOPS MiB/s Average min max 00:12:27.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19657.20 76.79 3256.61 632.62 16007.83 00:12:27.689 ======================================================== 00:12:27.689 Total : 19657.20 76.79 3256.61 632.62 16007.83 00:12:27.689 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.689 rmmod nvme_tcp 00:12:27.689 rmmod nvme_fabrics 00:12:27.689 rmmod nvme_keyring 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 255782 ']' 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 255782 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 255782 ']' 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 255782 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 255782 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 255782' 00:12:27.689 killing process with pid 255782 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 255782 00:12:27.689 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 255782 00:12:27.689 nvmf threads initialize successfully 00:12:27.689 bdev subsystem init successfully 00:12:27.690 created a nvmf target service 00:12:27.690 create targets's poll groups done 00:12:27.690 all subsystems of target started 00:12:27.690 nvmf target is running 00:12:27.690 all subsystems of target stopped 00:12:27.690 destroy targets's poll groups done 00:12:27.690 destroyed the nvmf target service 00:12:27.690 bdev subsystem finish 
successfully 00:12:27.690 nvmf threads destroy successfully 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.690 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.259 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.260 00:12:28.260 real 0m19.879s 00:12:28.260 user 0m46.390s 00:12:28.260 sys 0m6.048s 00:12:28.260 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.260 ************************************ 00:12:28.260 END TEST nvmf_example 00:12:28.260 ************************************ 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.260 ************************************ 00:12:28.260 START TEST nvmf_filesystem 00:12:28.260 ************************************ 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:28.260 * Looking for test storage... 
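The trace above (nvmf/common.sh@250-291, during nvmftestinit) builds a network-namespace loopback topology before running the nvmf example target: the target-side interface is moved into a fresh namespace, both sides get 10.0.0.x/24 addresses, TCP port 4420 is opened in the firewall, and reachability is verified with one ping in each direction. The following is a hedged standalone recap assembled only from the commands visible in the log; the interface names cvl_0_0/cvl_0_1 are specific to this test host, and RUN=echo keeps it a dry run since the real commands require root.

```shell
#!/usr/bin/env bash
# Recap of the netns setup seen in the trace. RUN defaults to 'echo'
# so the commands are printed, not executed (the real ones need root).
RUN=${RUN:-echo}

TARGET_IF=cvl_0_0       # moved into the namespace; target listens on 10.0.0.2
INITIATOR_IF=cvl_0_1    # stays in the root namespace; initiator uses 10.0.0.1
NS=cvl_0_0_ns_spdk

$RUN ip netns add "$NS"
$RUN ip link set "$TARGET_IF" netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$RUN ip link set "$INITIATOR_IF" up
$RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
$RUN ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic (port 4420) in from the initiator interface,
# then confirm reachability in both directions.
$RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1
```

In the log the iptables rule is additionally tagged with an `-m comment --comment 'SPDK_NVMF:…'` marker (via the `ipts` wrapper), which is what lets the `iptr` cleanup step at the end of the run strip only SPDK-added rules with `iptables-save | grep -v SPDK_NVMF | iptables-restore`.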
00:12:28.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:28.260 
20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:28.260 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:28.260 --rc genhtml_branch_coverage=1 00:12:28.260 --rc genhtml_function_coverage=1 00:12:28.260 --rc genhtml_legend=1 00:12:28.260 --rc geninfo_all_blocks=1 00:12:28.260 --rc geninfo_unexecuted_blocks=1 00:12:28.260 00:12:28.260 ' 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:28.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.260 --rc genhtml_branch_coverage=1 00:12:28.260 --rc genhtml_function_coverage=1 00:12:28.260 --rc genhtml_legend=1 00:12:28.260 --rc geninfo_all_blocks=1 00:12:28.260 --rc geninfo_unexecuted_blocks=1 00:12:28.260 00:12:28.260 ' 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:28.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.260 --rc genhtml_branch_coverage=1 00:12:28.260 --rc genhtml_function_coverage=1 00:12:28.260 --rc genhtml_legend=1 00:12:28.260 --rc geninfo_all_blocks=1 00:12:28.260 --rc geninfo_unexecuted_blocks=1 00:12:28.260 00:12:28.260 ' 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:28.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.260 --rc genhtml_branch_coverage=1 00:12:28.260 --rc genhtml_function_coverage=1 00:12:28.260 --rc genhtml_legend=1 00:12:28.260 --rc geninfo_all_blocks=1 00:12:28.260 --rc geninfo_unexecuted_blocks=1 00:12:28.260 00:12:28.260 ' 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:28.260 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:28.260 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:28.261 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:28.261 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:28.261 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:28.261 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:28.525 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:28.525 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:28.526 
20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:28.526 #define SPDK_CONFIG_H 00:12:28.526 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:28.526 #define SPDK_CONFIG_APPS 1 00:12:28.526 #define SPDK_CONFIG_ARCH native 00:12:28.526 #undef SPDK_CONFIG_ASAN 00:12:28.526 #undef SPDK_CONFIG_AVAHI 00:12:28.526 #undef SPDK_CONFIG_CET 00:12:28.526 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:28.526 #define SPDK_CONFIG_COVERAGE 1 00:12:28.526 #define SPDK_CONFIG_CROSS_PREFIX 00:12:28.526 #undef SPDK_CONFIG_CRYPTO 00:12:28.526 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:28.526 #undef SPDK_CONFIG_CUSTOMOCF 00:12:28.526 #undef SPDK_CONFIG_DAOS 00:12:28.526 #define SPDK_CONFIG_DAOS_DIR 00:12:28.526 #define SPDK_CONFIG_DEBUG 1 00:12:28.526 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:28.526 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:28.526 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:28.526 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:28.526 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:28.526 #undef SPDK_CONFIG_DPDK_UADK 00:12:28.526 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:28.526 #define SPDK_CONFIG_EXAMPLES 1 00:12:28.526 #undef SPDK_CONFIG_FC 00:12:28.526 #define SPDK_CONFIG_FC_PATH 00:12:28.526 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:28.526 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:28.526 #define SPDK_CONFIG_FSDEV 1 00:12:28.526 #undef SPDK_CONFIG_FUSE 00:12:28.526 #undef SPDK_CONFIG_FUZZER 00:12:28.526 #define SPDK_CONFIG_FUZZER_LIB 00:12:28.526 #undef SPDK_CONFIG_GOLANG 00:12:28.526 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:28.526 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:28.526 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:28.526 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:28.526 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:28.526 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:28.526 #undef SPDK_CONFIG_HAVE_LZ4 00:12:28.526 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:28.526 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:28.526 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:28.526 #define SPDK_CONFIG_IDXD 1 00:12:28.526 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:28.526 #undef SPDK_CONFIG_IPSEC_MB 00:12:28.526 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:28.526 #define SPDK_CONFIG_ISAL 1 00:12:28.526 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:28.526 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:28.526 #define SPDK_CONFIG_LIBDIR 00:12:28.526 #undef SPDK_CONFIG_LTO 00:12:28.526 #define SPDK_CONFIG_MAX_LCORES 128 00:12:28.526 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:28.526 #define SPDK_CONFIG_NVME_CUSE 1 00:12:28.526 #undef SPDK_CONFIG_OCF 00:12:28.526 #define SPDK_CONFIG_OCF_PATH 00:12:28.526 #define SPDK_CONFIG_OPENSSL_PATH 00:12:28.526 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:28.526 #define SPDK_CONFIG_PGO_DIR 00:12:28.526 #undef SPDK_CONFIG_PGO_USE 00:12:28.526 #define SPDK_CONFIG_PREFIX /usr/local 00:12:28.526 #undef SPDK_CONFIG_RAID5F 00:12:28.526 #undef SPDK_CONFIG_RBD 00:12:28.526 #define SPDK_CONFIG_RDMA 1 00:12:28.526 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:28.526 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:28.526 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:28.526 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:28.526 #define SPDK_CONFIG_SHARED 1 00:12:28.526 #undef SPDK_CONFIG_SMA 00:12:28.526 #define SPDK_CONFIG_TESTS 1 00:12:28.526 #undef SPDK_CONFIG_TSAN 00:12:28.526 #define SPDK_CONFIG_UBLK 1 00:12:28.526 #define SPDK_CONFIG_UBSAN 1 00:12:28.526 #undef SPDK_CONFIG_UNIT_TESTS 00:12:28.526 #undef SPDK_CONFIG_URING 00:12:28.526 #define SPDK_CONFIG_URING_PATH 00:12:28.526 #undef SPDK_CONFIG_URING_ZNS 00:12:28.526 #undef SPDK_CONFIG_USDT 00:12:28.526 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:28.526 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:28.526 #define SPDK_CONFIG_VFIO_USER 1 00:12:28.526 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:28.526 #define SPDK_CONFIG_VHOST 1 00:12:28.526 #define SPDK_CONFIG_VIRTIO 1 00:12:28.526 #undef SPDK_CONFIG_VTUNE 00:12:28.526 #define SPDK_CONFIG_VTUNE_DIR 00:12:28.526 #define SPDK_CONFIG_WERROR 1 00:12:28.526 #define SPDK_CONFIG_WPDK_DIR 00:12:28.526 #undef SPDK_CONFIG_XNVME 00:12:28.526 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:28.526 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:28.527 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:28.527 
20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:28.527 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:28.527 
20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:28.527 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:28.528 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:28.528 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 258502 ]] 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 258502 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.gBObdT 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.gBObdT/tests/target /tmp/spdk.gBObdT 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:28.529 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88792690688 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=94489763840 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5697073152 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:28.530 
20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47234850816 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47244881920 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=18874859520 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=18897952768 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23093248 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47244570624 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47244881920 00:12:28.530 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9448964096 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9448976384 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:28.530 * Looking for test storage... 
00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88792690688 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=7911665664 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.530 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:28.530 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:28.530 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.531 --rc genhtml_branch_coverage=1 00:12:28.531 --rc genhtml_function_coverage=1 00:12:28.531 --rc genhtml_legend=1 00:12:28.531 --rc geninfo_all_blocks=1 00:12:28.531 --rc geninfo_unexecuted_blocks=1 00:12:28.531 00:12:28.531 ' 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.531 --rc genhtml_branch_coverage=1 00:12:28.531 --rc genhtml_function_coverage=1 00:12:28.531 --rc genhtml_legend=1 00:12:28.531 --rc geninfo_all_blocks=1 00:12:28.531 --rc geninfo_unexecuted_blocks=1 00:12:28.531 00:12:28.531 ' 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.531 --rc genhtml_branch_coverage=1 00:12:28.531 --rc genhtml_function_coverage=1 00:12:28.531 --rc genhtml_legend=1 00:12:28.531 --rc geninfo_all_blocks=1 00:12:28.531 --rc geninfo_unexecuted_blocks=1 00:12:28.531 00:12:28.531 ' 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.531 --rc genhtml_branch_coverage=1 00:12:28.531 --rc genhtml_function_coverage=1 00:12:28.531 --rc genhtml_legend=1 00:12:28.531 --rc geninfo_all_blocks=1 00:12:28.531 --rc geninfo_unexecuted_blocks=1 00:12:28.531 00:12:28.531 ' 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.531 20:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.531 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:28.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:28.532 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.105 20:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.105 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:35.106 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:35.106 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.106 20:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:35.106 Found net devices under 0000:af:00.0: cvl_0_0 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:35.106 Found net devices under 0000:af:00.1: cvl_0_1 00:12:35.106 20:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:35.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:35.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:12:35.106 00:12:35.106 --- 10.0.0.2 ping statistics --- 00:12:35.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.106 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:12:35.106 00:12:35.106 --- 10.0.0.1 ping statistics --- 00:12:35.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.106 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.106 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:35.107 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:35.107 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.107 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:35.107 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:35.107 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.107 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:35.107 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:35.107 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:35.107 20:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.107 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.107 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.107 ************************************ 00:12:35.107 START TEST nvmf_filesystem_no_in_capsule 00:12:35.107 ************************************ 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=261684 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 261684 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 261684 ']' 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.107 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.107 [2024-12-05 20:32:28.077391] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:12:35.107 [2024-12-05 20:32:28.077435] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.107 [2024-12-05 20:32:28.155110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.107 [2024-12-05 20:32:28.195784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.107 [2024-12-05 20:32:28.195814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:35.107 [2024-12-05 20:32:28.195821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.107 [2024-12-05 20:32:28.195826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.107 [2024-12-05 20:32:28.195831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.107 [2024-12-05 20:32:28.197391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.107 [2024-12-05 20:32:28.197503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.107 [2024-12-05 20:32:28.197592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.107 [2024-12-05 20:32:28.197594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.677 [2024-12-05 20:32:28.924584] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.677 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.677 Malloc1 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.677 [2024-12-05 20:32:29.089125] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:35.677 20:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.677 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.938 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:35.938 { 00:12:35.938 "name": "Malloc1", 00:12:35.938 "aliases": [ 00:12:35.938 "f53ea51c-8866-4f59-81ff-76b104b590b7" 00:12:35.938 ], 00:12:35.938 "product_name": "Malloc disk", 00:12:35.938 "block_size": 512, 00:12:35.938 "num_blocks": 1048576, 00:12:35.938 "uuid": "f53ea51c-8866-4f59-81ff-76b104b590b7", 00:12:35.938 "assigned_rate_limits": { 00:12:35.938 "rw_ios_per_sec": 0, 00:12:35.938 "rw_mbytes_per_sec": 0, 00:12:35.938 "r_mbytes_per_sec": 0, 00:12:35.938 "w_mbytes_per_sec": 0 00:12:35.938 }, 00:12:35.938 "claimed": true, 00:12:35.938 "claim_type": "exclusive_write", 00:12:35.938 "zoned": false, 00:12:35.938 "supported_io_types": { 00:12:35.938 "read": true, 00:12:35.938 "write": true, 00:12:35.938 "unmap": true, 00:12:35.938 "flush": true, 00:12:35.938 "reset": true, 00:12:35.938 "nvme_admin": false, 00:12:35.938 "nvme_io": false, 00:12:35.938 "nvme_io_md": false, 00:12:35.938 "write_zeroes": true, 00:12:35.938 "zcopy": true, 00:12:35.938 "get_zone_info": false, 00:12:35.938 "zone_management": false, 00:12:35.938 "zone_append": false, 00:12:35.938 "compare": false, 00:12:35.938 "compare_and_write": 
false, 00:12:35.938 "abort": true, 00:12:35.938 "seek_hole": false, 00:12:35.938 "seek_data": false, 00:12:35.938 "copy": true, 00:12:35.938 "nvme_iov_md": false 00:12:35.938 }, 00:12:35.938 "memory_domains": [ 00:12:35.938 { 00:12:35.938 "dma_device_id": "system", 00:12:35.938 "dma_device_type": 1 00:12:35.938 }, 00:12:35.938 { 00:12:35.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.938 "dma_device_type": 2 00:12:35.938 } 00:12:35.938 ], 00:12:35.938 "driver_specific": {} 00:12:35.938 } 00:12:35.938 ]' 00:12:35.938 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:35.938 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:35.938 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:35.938 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:35.938 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:35.938 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:35.938 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:35.938 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.318 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:37.318 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:37.318 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.318 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:37.318 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:39.226 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:39.226 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:39.226 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:39.227 20:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:39.227 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:39.486 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:39.486 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:40.865 20:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.865 ************************************ 00:12:40.865 START TEST filesystem_ext4 00:12:40.865 ************************************ 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:40.865 20:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:40.865 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:40.865 mke2fs 1.47.0 (5-Feb-2023) 00:12:40.865 Discarding device blocks: 0/522240 done 00:12:40.865 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:40.865 Filesystem UUID: b8ad8406-59ce-45d1-9c8a-1b181f685794 00:12:40.865 Superblock backups stored on blocks: 00:12:40.865 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:40.865 00:12:40.865 Allocating group tables: 0/64 done 00:12:40.865 Writing inode tables: 0/64 done 00:12:40.865 Creating journal (8192 blocks): done 00:12:43.076 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:12:43.076 00:12:43.076 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:43.076 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:49.647 20:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 261684 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:49.647 00:12:49.647 real 0m8.346s 00:12:49.647 user 0m0.032s 00:12:49.647 sys 0m0.102s 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:49.647 ************************************ 00:12:49.647 END TEST filesystem_ext4 00:12:49.647 ************************************ 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:49.647 
20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:49.647 ************************************ 00:12:49.647 START TEST filesystem_btrfs 00:12:49.647 ************************************ 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:49.647 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:49.648 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:49.648 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:49.648 20:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:49.648 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:49.648 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:49.648 btrfs-progs v6.8.1 00:12:49.648 See https://btrfs.readthedocs.io for more information. 00:12:49.648 00:12:49.648 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:49.648 NOTE: several default settings have changed in version 5.15, please make sure 00:12:49.648 this does not affect your deployments: 00:12:49.648 - DUP for metadata (-m dup) 00:12:49.648 - enabled no-holes (-O no-holes) 00:12:49.648 - enabled free-space-tree (-R free-space-tree) 00:12:49.648 00:12:49.648 Label: (null) 00:12:49.648 UUID: 5a13d9de-22fe-4153-b050-d375aea2f731 00:12:49.648 Node size: 16384 00:12:49.648 Sector size: 4096 (CPU page size: 4096) 00:12:49.648 Filesystem size: 510.00MiB 00:12:49.648 Block group profiles: 00:12:49.648 Data: single 8.00MiB 00:12:49.648 Metadata: DUP 32.00MiB 00:12:49.648 System: DUP 8.00MiB 00:12:49.648 SSD detected: yes 00:12:49.648 Zoned device: no 00:12:49.648 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:49.648 Checksum: crc32c 00:12:49.648 Number of devices: 1 00:12:49.648 Devices: 00:12:49.648 ID SIZE PATH 00:12:49.648 1 510.00MiB /dev/nvme0n1p1 00:12:49.648 00:12:49.648 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:49.648 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:49.907 20:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 261684 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:49.907 00:12:49.907 real 0m0.871s 00:12:49.907 user 0m0.026s 00:12:49.907 sys 0m0.158s 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.907 
20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:49.907 ************************************ 00:12:49.907 END TEST filesystem_btrfs 00:12:49.907 ************************************ 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:49.907 ************************************ 00:12:49.907 START TEST filesystem_xfs 00:12:49.907 ************************************ 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:49.907 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:50.165 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:50.165 = sectsz=512 attr=2, projid32bit=1 00:12:50.165 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:50.165 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:50.165 data = bsize=4096 blocks=130560, imaxpct=25 00:12:50.165 = sunit=0 swidth=0 blks 00:12:50.165 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:50.165 log =internal log bsize=4096 blocks=16384, version=2 00:12:50.165 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:50.165 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:51.102 Discarding blocks...Done. 
00:12:51.102 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:51.102 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 261684 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:54.411 20:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:54.411 00:12:54.411 real 0m4.119s 00:12:54.411 user 0m0.018s 00:12:54.411 sys 0m0.124s 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:54.411 ************************************ 00:12:54.411 END TEST filesystem_xfs 00:12:54.411 ************************************ 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 261684 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 261684 ']' 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 261684 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 261684 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 261684' 00:12:54.411 killing process with pid 261684 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 261684 00:12:54.411 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 261684 00:12:54.673 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:54.673 00:12:54.673 real 0m20.045s 00:12:54.673 user 1m19.085s 00:12:54.673 sys 0m1.567s 00:12:54.673 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.673 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.673 ************************************ 00:12:54.673 END TEST nvmf_filesystem_no_in_capsule 00:12:54.673 ************************************ 00:12:54.673 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:54.673 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:54.673 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.673 20:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:54.933 ************************************ 00:12:54.934 START TEST nvmf_filesystem_in_capsule 00:12:54.934 ************************************ 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=265589 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 265589 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 265589 ']' 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.934 20:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.934 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.934 [2024-12-05 20:32:48.192981] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:12:54.934 [2024-12-05 20:32:48.193021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.934 [2024-12-05 20:32:48.271155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.934 [2024-12-05 20:32:48.311784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.934 [2024-12-05 20:32:48.311823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.934 [2024-12-05 20:32:48.311830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.934 [2024-12-05 20:32:48.311836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.934 [2024-12-05 20:32:48.311841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
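The `waitforlisten 265589` step in the trace above blocks until the freshly launched `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`. A minimal, self-contained sketch of that kind of wait loop follows; the function name and retry interval are illustrative, and the real helper does more (e.g. verifying the process is still alive) — this shows only the polling shape:

```shell
# Illustrative poll-until-socket-appears loop, in the spirit of
# waitforlisten: retry until the UNIX domain socket exists or the
# retry budget runs out.
wait_for_socket() {
    local sock=$1
    local retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}
```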
00:12:54.934 [2024-12-05 20:32:48.313450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.934 [2024-12-05 20:32:48.313566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.934 [2024-12-05 20:32:48.313677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.934 [2024-12-05 20:32:48.313677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.873 [2024-12-05 20:32:49.052602] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.873 Malloc1 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.873 20:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.873 [2024-12-05 20:32:49.208220] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:55.873 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:55.874 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:55.874 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:55.874 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:55.874 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.874 20:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.874 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.874 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:55.874 { 00:12:55.874 "name": "Malloc1", 00:12:55.874 "aliases": [ 00:12:55.874 "013cb508-aa72-4bfc-8f1f-2035f3f99876" 00:12:55.874 ], 00:12:55.874 "product_name": "Malloc disk", 00:12:55.874 "block_size": 512, 00:12:55.874 "num_blocks": 1048576, 00:12:55.874 "uuid": "013cb508-aa72-4bfc-8f1f-2035f3f99876", 00:12:55.874 "assigned_rate_limits": { 00:12:55.874 "rw_ios_per_sec": 0, 00:12:55.874 "rw_mbytes_per_sec": 0, 00:12:55.874 "r_mbytes_per_sec": 0, 00:12:55.874 "w_mbytes_per_sec": 0 00:12:55.874 }, 00:12:55.874 "claimed": true, 00:12:55.874 "claim_type": "exclusive_write", 00:12:55.874 "zoned": false, 00:12:55.874 "supported_io_types": { 00:12:55.874 "read": true, 00:12:55.874 "write": true, 00:12:55.874 "unmap": true, 00:12:55.874 "flush": true, 00:12:55.874 "reset": true, 00:12:55.874 "nvme_admin": false, 00:12:55.874 "nvme_io": false, 00:12:55.874 "nvme_io_md": false, 00:12:55.874 "write_zeroes": true, 00:12:55.874 "zcopy": true, 00:12:55.874 "get_zone_info": false, 00:12:55.874 "zone_management": false, 00:12:55.874 "zone_append": false, 00:12:55.874 "compare": false, 00:12:55.874 "compare_and_write": false, 00:12:55.874 "abort": true, 00:12:55.874 "seek_hole": false, 00:12:55.874 "seek_data": false, 00:12:55.874 "copy": true, 00:12:55.874 "nvme_iov_md": false 00:12:55.874 }, 00:12:55.874 "memory_domains": [ 00:12:55.874 { 00:12:55.874 "dma_device_id": "system", 00:12:55.874 "dma_device_type": 1 00:12:55.874 }, 00:12:55.874 { 00:12:55.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.874 "dma_device_type": 2 00:12:55.874 } 00:12:55.874 ], 00:12:55.874 
"driver_specific": {} 00:12:55.874 } 00:12:55.874 ]' 00:12:55.874 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:55.874 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:55.874 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:56.133 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:56.133 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:56.133 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:56.133 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:56.133 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.514 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.514 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:57.514 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.514 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:57.514 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:59.417 20:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:59.417 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:59.678 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:59.938 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:00.876 ************************************ 00:13:00.876 START TEST filesystem_in_capsule_ext4 00:13:00.876 ************************************ 00:13:00.876 20:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:00.876 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:00.876 mke2fs 1.47.0 (5-Feb-2023) 00:13:01.136 Discarding device blocks: 
0/522240 done 00:13:01.136 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:01.136 Filesystem UUID: 1acde4b8-4c39-4687-8288-c367e32f8fdf 00:13:01.136 Superblock backups stored on blocks: 00:13:01.136 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:01.136 00:13:01.136 Allocating group tables: 0/64 done 00:13:01.136 Writing inode tables: 0/64 done 00:13:01.136 Creating journal (8192 blocks): done 00:13:01.136 Writing superblocks and filesystem accounting information: 0/64 done 00:13:01.136 00:13:01.136 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:01.136 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 265589 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:07.709 00:13:07.709 real 0m6.056s 00:13:07.709 user 0m0.023s 00:13:07.709 sys 0m0.073s 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:07.709 ************************************ 00:13:07.709 END TEST filesystem_in_capsule_ext4 00:13:07.709 ************************************ 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.709 ************************************ 00:13:07.709 START 
TEST filesystem_in_capsule_btrfs 00:13:07.709 ************************************ 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:07.709 btrfs-progs v6.8.1 00:13:07.709 See https://btrfs.readthedocs.io for more information. 00:13:07.709 00:13:07.709 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:07.709 NOTE: several default settings have changed in version 5.15, please make sure 00:13:07.709 this does not affect your deployments: 00:13:07.709 - DUP for metadata (-m dup) 00:13:07.709 - enabled no-holes (-O no-holes) 00:13:07.709 - enabled free-space-tree (-R free-space-tree) 00:13:07.709 00:13:07.709 Label: (null) 00:13:07.709 UUID: 287e0936-429f-4dc0-b4e0-8e8958315002 00:13:07.709 Node size: 16384 00:13:07.709 Sector size: 4096 (CPU page size: 4096) 00:13:07.709 Filesystem size: 510.00MiB 00:13:07.709 Block group profiles: 00:13:07.709 Data: single 8.00MiB 00:13:07.709 Metadata: DUP 32.00MiB 00:13:07.709 System: DUP 8.00MiB 00:13:07.709 SSD detected: yes 00:13:07.709 Zoned device: no 00:13:07.709 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:07.709 Checksum: crc32c 00:13:07.709 Number of devices: 1 00:13:07.709 Devices: 00:13:07.709 ID SIZE PATH 00:13:07.709 1 510.00MiB /dev/nvme0n1p1 00:13:07.709 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:07.709 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:07.969 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:07.969 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:07.969 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:07.969 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:07.969 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:07.969 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:07.969 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 265589 00:13:07.969 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:07.969 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:07.969 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:07.970 00:13:07.970 real 0m0.867s 00:13:07.970 user 0m0.030s 00:13:07.970 sys 0m0.106s 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:07.970 ************************************ 00:13:07.970 END TEST filesystem_in_capsule_btrfs 00:13:07.970 ************************************ 00:13:07.970 20:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.970 ************************************ 00:13:07.970 START TEST filesystem_in_capsule_xfs 00:13:07.970 ************************************ 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:07.970 
20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:07.970 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:08.229 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:08.229 = sectsz=512 attr=2, projid32bit=1 00:13:08.229 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:08.229 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:08.229 data = bsize=4096 blocks=130560, imaxpct=25 00:13:08.229 = sunit=0 swidth=0 blks 00:13:08.229 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:08.229 log =internal log bsize=4096 blocks=16384, version=2 00:13:08.229 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:08.229 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:09.168 Discarding blocks...Done. 
00:13:09.168 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:09.168 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 265589 00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:11.074 00:13:11.074 real 0m2.926s 00:13:11.074 user 0m0.027s 00:13:11.074 sys 0m0.068s 00:13:11.074 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:11.075 ************************************ 00:13:11.075 END TEST filesystem_in_capsule_xfs 00:13:11.075 ************************************ 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.075 20:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 265589 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 265589 ']' 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 265589 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:11.075 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.075 20:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 265589 00:13:11.334 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.334 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.334 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 265589' 00:13:11.334 killing process with pid 265589 00:13:11.334 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 265589 00:13:11.334 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 265589 00:13:11.594 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:11.594 00:13:11.594 real 0m16.701s 00:13:11.594 user 1m5.832s 00:13:11.594 sys 0m1.417s 00:13:11.594 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.594 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.594 ************************************ 00:13:11.595 END TEST nvmf_filesystem_in_capsule 00:13:11.595 ************************************ 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.595 rmmod nvme_tcp 00:13:11.595 rmmod nvme_fabrics 00:13:11.595 rmmod nvme_keyring 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.595 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:14.134 00:13:14.134 real 0m45.518s 00:13:14.134 user 2m27.039s 00:13:14.134 sys 0m7.637s 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:14.134 ************************************ 00:13:14.134 END TEST nvmf_filesystem 00:13:14.134 ************************************ 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.134 ************************************ 00:13:14.134 START TEST nvmf_target_discovery 00:13:14.134 ************************************ 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:14.134 * Looking for test storage... 
00:13:14.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:14.134 
20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:14.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.134 --rc genhtml_branch_coverage=1 00:13:14.134 --rc genhtml_function_coverage=1 00:13:14.134 --rc genhtml_legend=1 00:13:14.134 --rc geninfo_all_blocks=1 00:13:14.134 --rc geninfo_unexecuted_blocks=1 00:13:14.134 00:13:14.134 ' 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:14.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.134 --rc genhtml_branch_coverage=1 00:13:14.134 --rc genhtml_function_coverage=1 00:13:14.134 --rc genhtml_legend=1 00:13:14.134 --rc geninfo_all_blocks=1 00:13:14.134 --rc geninfo_unexecuted_blocks=1 00:13:14.134 00:13:14.134 ' 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:14.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.134 --rc genhtml_branch_coverage=1 00:13:14.134 --rc genhtml_function_coverage=1 00:13:14.134 --rc genhtml_legend=1 00:13:14.134 --rc geninfo_all_blocks=1 00:13:14.134 --rc geninfo_unexecuted_blocks=1 00:13:14.134 00:13:14.134 ' 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:14.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.134 --rc genhtml_branch_coverage=1 00:13:14.134 --rc genhtml_function_coverage=1 00:13:14.134 --rc genhtml_legend=1 00:13:14.134 --rc geninfo_all_blocks=1 00:13:14.134 --rc geninfo_unexecuted_blocks=1 00:13:14.134 00:13:14.134 ' 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.134 20:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.134 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.135 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.717 20:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.717 20:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:20.717 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:20.717 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:20.718 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.718 20:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:20.718 Found net devices under 0000:af:00.0: cvl_0_0 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.718 20:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:20.718 Found net devices under 0000:af:00.1: cvl_0_1 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:20.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:13:20.718 00:13:20.718 --- 10.0.0.2 ping statistics --- 00:13:20.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.718 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:13:20.718 00:13:20.718 --- 10.0.0.1 ping statistics --- 00:13:20.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.718 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=272947 00:13:20.718 20:33:13 
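The `nvmf_tcp_init` sequence traced above builds an isolated TCP test topology: both E810 ports are address-flushed, the target-side port `cvl_0_0` is moved into a dedicated network namespace, `10.0.0.1`/`10.0.0.2` are assigned, an iptables rule opens port 4420, and connectivity is verified with `ping` in both directions. A minimal standalone sketch of those steps, using the interface names and addresses from this run (adjust for your hardware; must run as root):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps from nvmf/common.sh as seen in this log.
# Interface names (cvl_0_0/cvl_0_1) and IPs are taken from this run.
set -e

TARGET_IF=cvl_0_0        # target-side port, moved into the namespace
INITIATOR_IF=cvl_0_1     # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the target's listening port
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions, as the test does
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

This is host network configuration requiring root and the specific NICs from this bed, so it is a sketch of the logged sequence rather than a portable script.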
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 272947 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 272947 ']' 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.718 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.718 [2024-12-05 20:33:13.397023] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:13:20.718 [2024-12-05 20:33:13.397067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.718 [2024-12-05 20:33:13.474047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.718 [2024-12-05 20:33:13.511802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:20.718 [2024-12-05 20:33:13.511837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.718 [2024-12-05 20:33:13.511843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.718 [2024-12-05 20:33:13.511849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.718 [2024-12-05 20:33:13.511853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.718 [2024-12-05 20:33:13.513449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.719 [2024-12-05 20:33:13.513560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.719 [2024-12-05 20:33:13.513670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.719 [2024-12-05 20:33:13.513672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.978 [2024-12-05 20:33:14.252233] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.978 Null1 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:20.978 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 
20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 [2024-12-05 20:33:14.328208] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 Null2 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 
20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 Null3 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 Null4 00:13:20.979 
20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.979 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
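The `discovery.sh` loop traced above creates, for each of `cnode1`..`cnode4`: a 100 MiB null bdev, a subsystem with a fixed serial, a namespace backed by that bdev, and a TCP listener on `10.0.0.2:4420`; it then exposes the discovery service itself and adds a referral on port 4430. A condensed sketch of that RPC sequence, assuming a running `nvmf_tgt` and that `rpc.py` is SPDK's `scripts/rpc.py` talking to `/var/tmp/spdk.sock` (the socket `waitforlisten` polls for):

```shell
# Sketch of the discovery.sh setup loop (commands mirror the rpc_cmd
# calls in this log; rpc.py path/socket are environment assumptions).
rpc.py nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
    rpc.py bdev_null_create "Null$i" 102400 512    # 102400 MiB-unit size, 512 B blocks
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                # -a: allow any host
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

# Expose the discovery subsystem and add a referral on port 4430
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

With this in place, `nvme discover` against `10.0.0.2:4420` returns the six records shown below in the log: the current discovery subsystem, the four NVMe subsystems, and the port-4430 referral.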
common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:21.239 00:13:21.239 Discovery Log Number of Records 6, Generation counter 6 00:13:21.239 =====Discovery Log Entry 0====== 00:13:21.239 trtype: tcp 00:13:21.239 adrfam: ipv4 00:13:21.239 subtype: current discovery subsystem 00:13:21.239 treq: not required 00:13:21.239 portid: 0 00:13:21.239 trsvcid: 4420 00:13:21.239 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:21.239 traddr: 10.0.0.2 00:13:21.239 eflags: explicit discovery connections, duplicate discovery information 00:13:21.239 sectype: none 00:13:21.239 =====Discovery Log Entry 1====== 00:13:21.239 trtype: tcp 00:13:21.239 adrfam: ipv4 00:13:21.239 subtype: nvme subsystem 00:13:21.239 treq: not required 00:13:21.239 portid: 0 00:13:21.239 trsvcid: 4420 00:13:21.239 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:21.239 traddr: 10.0.0.2 00:13:21.239 eflags: none 00:13:21.239 sectype: none 00:13:21.239 =====Discovery Log Entry 2====== 00:13:21.239 
trtype: tcp 00:13:21.239 adrfam: ipv4 00:13:21.239 subtype: nvme subsystem 00:13:21.239 treq: not required 00:13:21.239 portid: 0 00:13:21.239 trsvcid: 4420 00:13:21.239 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:21.239 traddr: 10.0.0.2 00:13:21.239 eflags: none 00:13:21.239 sectype: none 00:13:21.239 =====Discovery Log Entry 3====== 00:13:21.239 trtype: tcp 00:13:21.239 adrfam: ipv4 00:13:21.239 subtype: nvme subsystem 00:13:21.239 treq: not required 00:13:21.239 portid: 0 00:13:21.239 trsvcid: 4420 00:13:21.239 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:21.239 traddr: 10.0.0.2 00:13:21.239 eflags: none 00:13:21.239 sectype: none 00:13:21.239 =====Discovery Log Entry 4====== 00:13:21.239 trtype: tcp 00:13:21.239 adrfam: ipv4 00:13:21.239 subtype: nvme subsystem 00:13:21.239 treq: not required 00:13:21.239 portid: 0 00:13:21.239 trsvcid: 4420 00:13:21.239 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:21.239 traddr: 10.0.0.2 00:13:21.239 eflags: none 00:13:21.239 sectype: none 00:13:21.239 =====Discovery Log Entry 5====== 00:13:21.239 trtype: tcp 00:13:21.239 adrfam: ipv4 00:13:21.239 subtype: discovery subsystem referral 00:13:21.239 treq: not required 00:13:21.239 portid: 0 00:13:21.239 trsvcid: 4430 00:13:21.239 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:21.239 traddr: 10.0.0.2 00:13:21.239 eflags: none 00:13:21.239 sectype: none 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:21.239 Perform nvmf subsystem discovery via RPC 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.239 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.239 [ 00:13:21.239 { 00:13:21.239 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:21.239 "subtype": "Discovery", 00:13:21.239 "listen_addresses": [ 00:13:21.239 { 00:13:21.239 "trtype": "TCP", 00:13:21.239 "adrfam": "IPv4", 00:13:21.239 "traddr": "10.0.0.2", 00:13:21.239 "trsvcid": "4420" 00:13:21.239 } 00:13:21.239 ], 00:13:21.239 "allow_any_host": true, 00:13:21.239 "hosts": [] 00:13:21.239 }, 00:13:21.239 { 00:13:21.239 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.239 "subtype": "NVMe", 00:13:21.239 "listen_addresses": [ 00:13:21.239 { 00:13:21.239 "trtype": "TCP", 00:13:21.239 "adrfam": "IPv4", 00:13:21.239 "traddr": "10.0.0.2", 00:13:21.239 "trsvcid": "4420" 00:13:21.239 } 00:13:21.239 ], 00:13:21.239 "allow_any_host": true, 00:13:21.239 "hosts": [], 00:13:21.239 "serial_number": "SPDK00000000000001", 00:13:21.239 "model_number": "SPDK bdev Controller", 00:13:21.239 "max_namespaces": 32, 00:13:21.239 "min_cntlid": 1, 00:13:21.239 "max_cntlid": 65519, 00:13:21.239 "namespaces": [ 00:13:21.239 { 00:13:21.239 "nsid": 1, 00:13:21.239 "bdev_name": "Null1", 00:13:21.239 "name": "Null1", 00:13:21.239 "nguid": "09825F6C008E4A1A97DB54498FD57DE7", 00:13:21.239 "uuid": "09825f6c-008e-4a1a-97db-54498fd57de7" 00:13:21.239 } 00:13:21.239 ] 00:13:21.239 }, 00:13:21.239 { 00:13:21.239 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:21.239 "subtype": "NVMe", 00:13:21.239 "listen_addresses": [ 00:13:21.239 { 00:13:21.239 "trtype": "TCP", 00:13:21.239 "adrfam": "IPv4", 00:13:21.239 "traddr": "10.0.0.2", 00:13:21.239 "trsvcid": "4420" 00:13:21.239 } 00:13:21.239 ], 00:13:21.240 "allow_any_host": true, 00:13:21.240 "hosts": [], 00:13:21.240 "serial_number": "SPDK00000000000002", 00:13:21.240 "model_number": "SPDK bdev Controller", 00:13:21.240 "max_namespaces": 32, 00:13:21.240 "min_cntlid": 1, 00:13:21.240 "max_cntlid": 65519, 00:13:21.240 "namespaces": [ 00:13:21.240 { 00:13:21.240 "nsid": 1, 00:13:21.240 "bdev_name": "Null2", 00:13:21.240 "name": "Null2", 00:13:21.240 "nguid": "8A98844C6FFD48B6ADBEC08918D988C8", 
00:13:21.240 "uuid": "8a98844c-6ffd-48b6-adbe-c08918d988c8" 00:13:21.240 } 00:13:21.240 ] 00:13:21.240 }, 00:13:21.240 { 00:13:21.240 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:21.240 "subtype": "NVMe", 00:13:21.240 "listen_addresses": [ 00:13:21.240 { 00:13:21.240 "trtype": "TCP", 00:13:21.240 "adrfam": "IPv4", 00:13:21.240 "traddr": "10.0.0.2", 00:13:21.240 "trsvcid": "4420" 00:13:21.240 } 00:13:21.240 ], 00:13:21.240 "allow_any_host": true, 00:13:21.240 "hosts": [], 00:13:21.240 "serial_number": "SPDK00000000000003", 00:13:21.240 "model_number": "SPDK bdev Controller", 00:13:21.240 "max_namespaces": 32, 00:13:21.240 "min_cntlid": 1, 00:13:21.240 "max_cntlid": 65519, 00:13:21.240 "namespaces": [ 00:13:21.240 { 00:13:21.240 "nsid": 1, 00:13:21.240 "bdev_name": "Null3", 00:13:21.240 "name": "Null3", 00:13:21.240 "nguid": "2EED8748AC9045428797161E529A093C", 00:13:21.240 "uuid": "2eed8748-ac90-4542-8797-161e529a093c" 00:13:21.240 } 00:13:21.240 ] 00:13:21.240 }, 00:13:21.240 { 00:13:21.240 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:21.240 "subtype": "NVMe", 00:13:21.240 "listen_addresses": [ 00:13:21.240 { 00:13:21.240 "trtype": "TCP", 00:13:21.240 "adrfam": "IPv4", 00:13:21.240 "traddr": "10.0.0.2", 00:13:21.240 "trsvcid": "4420" 00:13:21.240 } 00:13:21.240 ], 00:13:21.240 "allow_any_host": true, 00:13:21.240 "hosts": [], 00:13:21.240 "serial_number": "SPDK00000000000004", 00:13:21.240 "model_number": "SPDK bdev Controller", 00:13:21.240 "max_namespaces": 32, 00:13:21.240 "min_cntlid": 1, 00:13:21.240 "max_cntlid": 65519, 00:13:21.240 "namespaces": [ 00:13:21.240 { 00:13:21.240 "nsid": 1, 00:13:21.240 "bdev_name": "Null4", 00:13:21.240 "name": "Null4", 00:13:21.240 "nguid": "DFBE7AA362ED40ECA64A8DCC4FF209F1", 00:13:21.240 "uuid": "dfbe7aa3-62ed-40ec-a64a-8dcc4ff209f1" 00:13:21.240 } 00:13:21.240 ] 00:13:21.240 } 00:13:21.240 ] 00:13:21.240 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.240 
20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:21.240 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:21.240 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.240 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.240 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.240 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.240 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:21.500 rmmod nvme_tcp 00:13:21.500 rmmod nvme_fabrics 00:13:21.500 rmmod nvme_keyring 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 272947 ']' 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 272947 00:13:21.500 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 272947 ']' 00:13:21.501 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 272947 00:13:21.501 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:21.501 
20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.501 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 272947 00:13:21.501 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.501 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.501 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 272947' 00:13:21.501 killing process with pid 272947 00:13:21.501 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 272947 00:13:21.501 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 272947 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.760 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.298 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:24.298 00:13:24.298 real 0m10.038s 00:13:24.298 user 0m8.246s 00:13:24.298 sys 0m4.872s 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:24.299 ************************************ 00:13:24.299 END TEST nvmf_target_discovery 00:13:24.299 ************************************ 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.299 ************************************ 00:13:24.299 START TEST nvmf_referrals 00:13:24.299 ************************************ 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:24.299 * Looking for test storage... 
00:13:24.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:24.299 20:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:24.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.299 
--rc genhtml_branch_coverage=1 00:13:24.299 --rc genhtml_function_coverage=1 00:13:24.299 --rc genhtml_legend=1 00:13:24.299 --rc geninfo_all_blocks=1 00:13:24.299 --rc geninfo_unexecuted_blocks=1 00:13:24.299 00:13:24.299 ' 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:24.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.299 --rc genhtml_branch_coverage=1 00:13:24.299 --rc genhtml_function_coverage=1 00:13:24.299 --rc genhtml_legend=1 00:13:24.299 --rc geninfo_all_blocks=1 00:13:24.299 --rc geninfo_unexecuted_blocks=1 00:13:24.299 00:13:24.299 ' 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:24.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.299 --rc genhtml_branch_coverage=1 00:13:24.299 --rc genhtml_function_coverage=1 00:13:24.299 --rc genhtml_legend=1 00:13:24.299 --rc geninfo_all_blocks=1 00:13:24.299 --rc geninfo_unexecuted_blocks=1 00:13:24.299 00:13:24.299 ' 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:24.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.299 --rc genhtml_branch_coverage=1 00:13:24.299 --rc genhtml_function_coverage=1 00:13:24.299 --rc genhtml_legend=1 00:13:24.299 --rc geninfo_all_blocks=1 00:13:24.299 --rc geninfo_unexecuted_blocks=1 00:13:24.299 00:13:24.299 ' 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.299 
20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.299 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.300 20:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:24.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:24.300 20:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:24.300 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.873 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.873 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:30.873 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:30.873 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:30.873 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:30.873 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:30.874 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:30.874 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:30.874 Found net devices under 0000:af:00.0: cvl_0_0 00:13:30.874 20:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:30.874 Found net devices under 0000:af:00.1: cvl_0_1 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:13:30.874 00:13:30.874 --- 10.0.0.2 ping statistics --- 00:13:30.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.874 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:13:30.874 00:13:30.874 --- 10.0.0.1 ping statistics --- 00:13:30.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.874 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:30.874 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=276958 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 276958 00:13:30.875 
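The network bring-up traced above (flush, netns creation, address assignment, iptables rule, and the two cross-namespace pings) reduces to the following standalone sketch. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are taken from this run; the commands need root and the two ice ports enumerated earlier, so treat it as illustrative rather than copy-paste runnable.

```shell
# Hedged sketch of the nvmf_tcp_init sequence exercised in this log
# (requires root and real NICs; names and addresses are from this run).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the netns
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-side interface, then verify
# connectivity in both directions before starting the target.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Once both pings succeed, the target app is launched with `ip netns exec "$NS"` prepended, which is why the log's `nvmf_tgt` invocation runs inside `cvl_0_0_ns_spdk`.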
20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 276958 ']' 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.875 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.875 [2024-12-05 20:33:23.457564] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:13:30.875 [2024-12-05 20:33:23.457616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.875 [2024-12-05 20:33:23.537329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.875 [2024-12-05 20:33:23.576695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.875 [2024-12-05 20:33:23.576732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:30.875 [2024-12-05 20:33:23.576738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.875 [2024-12-05 20:33:23.576743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.875 [2024-12-05 20:33:23.576749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.875 [2024-12-05 20:33:23.578200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.875 [2024-12-05 20:33:23.578318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.875 [2024-12-05 20:33:23.578429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.875 [2024-12-05 20:33:23.578430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.875 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.875 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:30.875 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:30.875 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:30.875 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.134 [2024-12-05 20:33:24.321432] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.134 [2024-12-05 20:33:24.342185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:31.134 20:33:24 
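The transport and referral setup just traced (`referrals.sh@40`–`@48`) reduces to a short RPC sequence. A hedged sketch, assuming the stock `scripts/rpc.py` from the SPDK tree talking to the default `/var/tmp/spdk.sock`, with transport options and addresses taken from this run:

```shell
# Hedged sketch: create the TCP transport, expose the discovery service on
# the target IP, register three referrals, and confirm the count with jq.
RPC="scripts/rpc.py"    # run under 'ip netns exec cvl_0_0_ns_spdk' in this setup
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
$RPC nvmf_discovery_get_referrals | jq length                   # test expects 3
$RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr'  # the three IPs
```

The trace then cross-checks the same three addresses from the wire side with `nvme discover ... -o json`, filtering out the `"current discovery subsystem"` record so only the referrals remain.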
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:31.134 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.393 20:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:31.393 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:31.653 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:31.911 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:31.911 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:31.911 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:31.911 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:31.911 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:31.911 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.911 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:31.911 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:32.169 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:32.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:32.687 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:32.687 20:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:32.687 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:32.687 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:32.687 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:32.687 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:32.687 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:32.687 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:32.687 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.687 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:32.687 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.687 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:32.687 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:32.687 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.687 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:32.687 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.946 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.946 rmmod nvme_tcp 00:13:32.946 rmmod nvme_fabrics 00:13:33.207 rmmod nvme_keyring 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 276958 ']' 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 276958 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 276958 ']' 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 276958 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276958 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276958' 00:13:33.207 killing process with pid 276958 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 276958 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 276958 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:33.207 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:33.467 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:33.467 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:33.467 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:33.467 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.467 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:33.467 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.467 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.467 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.376 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:35.376 00:13:35.376 real 0m11.508s 00:13:35.376 user 0m14.931s 00:13:35.376 sys 0m5.255s 00:13:35.376 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.376 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:35.376 ************************************ 
00:13:35.376 END TEST nvmf_referrals 00:13:35.376 ************************************ 00:13:35.376 20:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:35.376 20:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:35.376 20:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.376 20:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.376 ************************************ 00:13:35.376 START TEST nvmf_connect_disconnect 00:13:35.376 ************************************ 00:13:35.376 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:35.637 * Looking for test storage... 
00:13:35.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:35.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.637 --rc genhtml_branch_coverage=1 00:13:35.637 --rc genhtml_function_coverage=1 00:13:35.637 --rc genhtml_legend=1 00:13:35.637 --rc geninfo_all_blocks=1 00:13:35.637 --rc geninfo_unexecuted_blocks=1 00:13:35.637 00:13:35.637 ' 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:35.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.637 --rc genhtml_branch_coverage=1 00:13:35.637 --rc genhtml_function_coverage=1 00:13:35.637 --rc genhtml_legend=1 00:13:35.637 --rc geninfo_all_blocks=1 00:13:35.637 --rc geninfo_unexecuted_blocks=1 00:13:35.637 00:13:35.637 ' 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:35.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.637 --rc genhtml_branch_coverage=1 00:13:35.637 --rc genhtml_function_coverage=1 00:13:35.637 --rc genhtml_legend=1 00:13:35.637 --rc geninfo_all_blocks=1 00:13:35.637 --rc geninfo_unexecuted_blocks=1 00:13:35.637 00:13:35.637 ' 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:35.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.637 --rc genhtml_branch_coverage=1 00:13:35.637 --rc genhtml_function_coverage=1 00:13:35.637 --rc genhtml_legend=1 00:13:35.637 --rc geninfo_all_blocks=1 00:13:35.637 --rc geninfo_unexecuted_blocks=1 00:13:35.637 00:13:35.637 ' 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.637 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.638 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.638 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.638 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.638 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.638 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.638 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.638 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.638 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:35.638 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:35.638 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:35.638 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.216 20:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:42.216 20:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:42.216 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:42.216 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.216 20:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:42.216 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:42.217 Found net devices under 0000:af:00.0: cvl_0_0 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.217 20:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:42.217 Found net devices under 0000:af:00.1: cvl_0_1 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.217 20:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:42.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:13:42.217 00:13:42.217 --- 10.0.0.2 ping statistics --- 00:13:42.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.217 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:13:42.217 00:13:42.217 --- 10.0.0.1 ping statistics --- 00:13:42.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.217 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=281333 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 281333 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 281333 ']' 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.217 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:42.217 [2024-12-05 20:33:35.039350] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:13:42.217 [2024-12-05 20:33:35.039399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.217 [2024-12-05 20:33:35.116830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.217 [2024-12-05 20:33:35.156808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
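The setup traced above (common.sh@265–283) moves the target-side interface into a private network namespace, assigns 10.0.0.1 to the initiator side and 10.0.0.2 to the target side, and brings the links up, so initiator and target share one host but cross a real TCP path. A minimal sketch of that sequence, written as a command generator so it can be read and checked without root privileges (namespace and interface names copied from the log; the generator function itself is illustrative, not part of the harness):

```shell
#!/usr/bin/env bash
# Emit the netns provisioning steps used by the test harness.
# $1 = namespace, $2 = target-side NIC, $3 = initiator-side NIC.
netns_setup_cmds() {
    local ns=$1 tgt=$2 ini=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt netns $ns
ip addr add 10.0.0.1/24 dev $ini
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt
ip link set $ini up
ip netns exec $ns ip link set $tgt up
ip netns exec $ns ip link set lo up
EOF
}

# Same names as in the log; pipe to 'sudo sh' (or run line by line) to apply.
netns_setup_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The cross-namespace pings that follow in the log (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) are the sanity check that this topology is up before the target starts.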
00:13:42.217 [2024-12-05 20:33:35.156844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.217 [2024-12-05 20:33:35.156850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.217 [2024-12-05 20:33:35.156856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.217 [2024-12-05 20:33:35.156860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.217 [2024-12-05 20:33:35.158277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.217 [2024-12-05 20:33:35.158390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.217 [2024-12-05 20:33:35.158500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.217 [2024-12-05 20:33:35.158501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:42.477 20:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:42.477 [2024-12-05 20:33:35.901284] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.477 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.737 20:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:42.737 [2024-12-05 20:33:35.963361] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:42.737 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:46.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:00.091 20:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:00.091 rmmod nvme_tcp 00:14:00.091 rmmod nvme_fabrics 00:14:00.091 rmmod nvme_keyring 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 281333 ']' 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 281333 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 281333 ']' 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 281333 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 281333 00:14:00.091 
20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281333' 00:14:00.091 killing process with pid 281333 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 281333 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 281333 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.091 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.999 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:01.999 00:14:01.999 real 0m26.634s 00:14:01.999 user 1m14.134s 00:14:01.999 sys 0m5.876s 00:14:01.999 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.999 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.999 ************************************ 00:14:01.999 END TEST nvmf_connect_disconnect 00:14:01.999 ************************************ 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:02.259 ************************************ 00:14:02.259 START TEST nvmf_multitarget 00:14:02.259 ************************************ 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:02.259 * Looking for test storage... 
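The firewall handling visible in this run is a neat idiom: `ipts` (common.sh@790) installs each rule with `-m comment --comment 'SPDK_NVMF:<args>'`, and teardown (`iptr`, common.sh@791) pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, deleting exactly the rules the test added and nothing else. A self-contained sketch of the filtering step (the `saved` variable stands in for real `iptables-save` output, which needs root to capture):

```shell
# Two saved rules: one tagged by the harness, one pre-existing.
saved='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT" -j ACCEPT
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Drop only the harness-tagged rules, keep everything else.
# In the real script: iptables-save | grep -v SPDK_NVMF | iptables-restore
cleaned="$(printf '%s\n' "$saved" | grep -v SPDK_NVMF)"
printf '%s\n' "$cleaned"
```

Tagging rules at insert time is what lets cleanup stay idempotent: it never needs to remember how many rules were added or in what order.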
00:14:02.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.259 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:02.259 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.259 --rc genhtml_branch_coverage=1 00:14:02.259 --rc genhtml_function_coverage=1 00:14:02.259 --rc genhtml_legend=1 00:14:02.259 --rc geninfo_all_blocks=1 00:14:02.259 --rc geninfo_unexecuted_blocks=1 00:14:02.259 00:14:02.259 ' 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:02.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.260 --rc genhtml_branch_coverage=1 00:14:02.260 --rc genhtml_function_coverage=1 00:14:02.260 --rc genhtml_legend=1 00:14:02.260 --rc geninfo_all_blocks=1 00:14:02.260 --rc geninfo_unexecuted_blocks=1 00:14:02.260 00:14:02.260 ' 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:02.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.260 --rc genhtml_branch_coverage=1 00:14:02.260 --rc genhtml_function_coverage=1 00:14:02.260 --rc genhtml_legend=1 00:14:02.260 --rc geninfo_all_blocks=1 00:14:02.260 --rc geninfo_unexecuted_blocks=1 00:14:02.260 00:14:02.260 ' 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:02.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.260 --rc genhtml_branch_coverage=1 00:14:02.260 --rc genhtml_function_coverage=1 00:14:02.260 --rc genhtml_legend=1 00:14:02.260 --rc geninfo_all_blocks=1 00:14:02.260 --rc geninfo_unexecuted_blocks=1 00:14:02.260 00:14:02.260 ' 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.260 20:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.260 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
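Throughout the log, common.sh builds the target invocation as a bash array: `build_nvmf_app_args` appends `-i $NVMF_APP_SHM_ID -e 0xFFFF` (common.sh@29), and `nvmf_tcp_init` earlier prefixed the array with the `ip netns exec` wrapper (common.sh@266/293) so the target binds inside the test namespace. A condensed sketch with the values seen in this run (the bare `nvmf_tgt` name stands in for the full build path shown in the log):

```shell
# Start from the application binary and append core options,
# mirroring NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) in the log.
NVMF_APP_SHM_ID=0
NVMF_APP=(nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)

# Prefix the netns wrapper, as NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}"
# "${NVMF_APP[@]}") does at common.sh@293.
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[@]}"
```

Keeping the command as an array (rather than a flat string) preserves argument boundaries through the prefixing, which is why the harness can splice wrappers in front without re-quoting anything.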
00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:02.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.519 20:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:02.519 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:09.104 20:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:09.104 20:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:09.104 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:09.104 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.104 20:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.104 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:09.105 Found net devices under 0000:af:00.0: cvl_0_0 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.105 
20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:09.105 Found net devices under 0000:af:00.1: cvl_0_1 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.105 20:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:09.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:14:09.105 00:14:09.105 --- 10.0.0.2 ping statistics --- 00:14:09.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.105 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:09.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:14:09.105 00:14:09.105 --- 10.0.0.1 ping statistics --- 00:14:09.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.105 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=288321 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 288321 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 288321 ']' 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.105 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:09.105 [2024-12-05 20:34:01.789688] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:14:09.105 [2024-12-05 20:34:01.789738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.105 [2024-12-05 20:34:01.868207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.105 [2024-12-05 20:34:01.908530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.105 [2024-12-05 20:34:01.908566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:09.105 [2024-12-05 20:34:01.908572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.105 [2024-12-05 20:34:01.908578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.105 [2024-12-05 20:34:01.908582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.105 [2024-12-05 20:34:01.909994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.105 [2024-12-05 20:34:01.910025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.105 [2024-12-05 20:34:01.910138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.105 [2024-12-05 20:34:01.910139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.374 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.374 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:09.374 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:09.374 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:09.374 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:09.374 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.374 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:09.374 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:09.374 20:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:09.374 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:09.374 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:09.640 "nvmf_tgt_1" 00:14:09.640 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:09.640 "nvmf_tgt_2" 00:14:09.640 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:09.640 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:09.640 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:09.641 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:09.901 true 00:14:09.901 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:09.901 true 00:14:09.901 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:09.901 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:10.167 rmmod nvme_tcp 00:14:10.167 rmmod nvme_fabrics 00:14:10.167 rmmod nvme_keyring 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 288321 ']' 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 288321 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 288321 ']' 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 288321 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288321 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288321' 00:14:10.167 killing process with pid 288321 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 288321 00:14:10.167 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 288321 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.426 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.335 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:12.335 00:14:12.335 real 0m10.229s 00:14:12.335 user 0m9.679s 00:14:12.335 sys 0m4.909s 00:14:12.335 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.335 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:12.335 ************************************ 00:14:12.335 END TEST nvmf_multitarget 00:14:12.335 ************************************ 00:14:12.335 20:34:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:12.335 20:34:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:12.335 20:34:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.336 20:34:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:12.595 ************************************ 00:14:12.595 START TEST nvmf_rpc 00:14:12.595 ************************************ 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:12.595 * Looking for test storage... 
00:14:12.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.595 20:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:12.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.595 --rc genhtml_branch_coverage=1 00:14:12.595 --rc genhtml_function_coverage=1 00:14:12.595 --rc genhtml_legend=1 00:14:12.595 --rc geninfo_all_blocks=1 00:14:12.595 --rc geninfo_unexecuted_blocks=1 
00:14:12.595 00:14:12.595 ' 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:12.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.595 --rc genhtml_branch_coverage=1 00:14:12.595 --rc genhtml_function_coverage=1 00:14:12.595 --rc genhtml_legend=1 00:14:12.595 --rc geninfo_all_blocks=1 00:14:12.595 --rc geninfo_unexecuted_blocks=1 00:14:12.595 00:14:12.595 ' 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:12.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.595 --rc genhtml_branch_coverage=1 00:14:12.595 --rc genhtml_function_coverage=1 00:14:12.595 --rc genhtml_legend=1 00:14:12.595 --rc geninfo_all_blocks=1 00:14:12.595 --rc geninfo_unexecuted_blocks=1 00:14:12.595 00:14:12.595 ' 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:12.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.595 --rc genhtml_branch_coverage=1 00:14:12.595 --rc genhtml_function_coverage=1 00:14:12.595 --rc genhtml_legend=1 00:14:12.595 --rc geninfo_all_blocks=1 00:14:12.595 --rc geninfo_unexecuted_blocks=1 00:14:12.595 00:14:12.595 ' 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.595 20:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.595 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:12.595 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:12.595 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.595 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.595 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:12.596 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:12.596 20:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.229 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.230 
20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:14:19.230 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:19.230 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:19.230 Found net devices under 0000:af:00.0: cvl_0_0 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:19.230 Found net devices under 0000:af:00.1: cvl_0_1 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.230 20:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:19.230 
20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:19.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:14:19.230 00:14:19.230 --- 10.0.0.2 ping statistics --- 00:14:19.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.230 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:19.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:14:19.230 00:14:19.230 --- 10.0.0.1 ping statistics --- 00:14:19.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.230 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:19.230 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:19.230 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:19.230 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:19.230 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:19.230 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.230 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=292349 00:14:19.230 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 292349 00:14:19.230 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:19.230 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 292349 ']' 00:14:19.230 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.230 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.230 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.231 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.231 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.231 [2024-12-05 20:34:12.056216] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:14:19.231 [2024-12-05 20:34:12.056266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.231 [2024-12-05 20:34:12.133021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.231 [2024-12-05 20:34:12.171392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.231 [2024-12-05 20:34:12.171428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:19.231 [2024-12-05 20:34:12.171434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.231 [2024-12-05 20:34:12.171439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.231 [2024-12-05 20:34:12.171444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.231 [2024-12-05 20:34:12.172876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.231 [2024-12-05 20:34:12.172987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.231 [2024-12-05 20:34:12.173099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.231 [2024-12-05 20:34:12.173109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.490 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.490 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:19.490 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:19.490 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:19.490 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.490 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.490 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:19.490 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.490 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.749 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.749 20:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:19.749 "tick_rate": 2200000000, 00:14:19.749 "poll_groups": [ 00:14:19.749 { 00:14:19.749 "name": "nvmf_tgt_poll_group_000", 00:14:19.749 "admin_qpairs": 0, 00:14:19.749 "io_qpairs": 0, 00:14:19.749 "current_admin_qpairs": 0, 00:14:19.749 "current_io_qpairs": 0, 00:14:19.749 "pending_bdev_io": 0, 00:14:19.749 "completed_nvme_io": 0, 00:14:19.749 "transports": [] 00:14:19.749 }, 00:14:19.749 { 00:14:19.749 "name": "nvmf_tgt_poll_group_001", 00:14:19.749 "admin_qpairs": 0, 00:14:19.749 "io_qpairs": 0, 00:14:19.749 "current_admin_qpairs": 0, 00:14:19.749 "current_io_qpairs": 0, 00:14:19.749 "pending_bdev_io": 0, 00:14:19.749 "completed_nvme_io": 0, 00:14:19.749 "transports": [] 00:14:19.749 }, 00:14:19.749 { 00:14:19.749 "name": "nvmf_tgt_poll_group_002", 00:14:19.749 "admin_qpairs": 0, 00:14:19.749 "io_qpairs": 0, 00:14:19.749 "current_admin_qpairs": 0, 00:14:19.749 "current_io_qpairs": 0, 00:14:19.749 "pending_bdev_io": 0, 00:14:19.749 "completed_nvme_io": 0, 00:14:19.749 "transports": [] 00:14:19.749 }, 00:14:19.749 { 00:14:19.749 "name": "nvmf_tgt_poll_group_003", 00:14:19.749 "admin_qpairs": 0, 00:14:19.749 "io_qpairs": 0, 00:14:19.749 "current_admin_qpairs": 0, 00:14:19.749 "current_io_qpairs": 0, 00:14:19.749 "pending_bdev_io": 0, 00:14:19.749 "completed_nvme_io": 0, 00:14:19.749 "transports": [] 00:14:19.749 } 00:14:19.749 ] 00:14:19.749 }' 00:14:19.749 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:19.749 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:19.749 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:19.749 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:19.749 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:19.749 20:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:19.749 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:19.749 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:19.749 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.750 [2024-12-05 20:34:13.032417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:19.750 "tick_rate": 2200000000, 00:14:19.750 "poll_groups": [ 00:14:19.750 { 00:14:19.750 "name": "nvmf_tgt_poll_group_000", 00:14:19.750 "admin_qpairs": 0, 00:14:19.750 "io_qpairs": 0, 00:14:19.750 "current_admin_qpairs": 0, 00:14:19.750 "current_io_qpairs": 0, 00:14:19.750 "pending_bdev_io": 0, 00:14:19.750 "completed_nvme_io": 0, 00:14:19.750 "transports": [ 00:14:19.750 { 00:14:19.750 "trtype": "TCP" 00:14:19.750 } 00:14:19.750 ] 00:14:19.750 }, 00:14:19.750 { 00:14:19.750 "name": "nvmf_tgt_poll_group_001", 00:14:19.750 "admin_qpairs": 0, 00:14:19.750 "io_qpairs": 0, 00:14:19.750 "current_admin_qpairs": 0, 00:14:19.750 "current_io_qpairs": 0, 00:14:19.750 "pending_bdev_io": 0, 00:14:19.750 
"completed_nvme_io": 0, 00:14:19.750 "transports": [ 00:14:19.750 { 00:14:19.750 "trtype": "TCP" 00:14:19.750 } 00:14:19.750 ] 00:14:19.750 }, 00:14:19.750 { 00:14:19.750 "name": "nvmf_tgt_poll_group_002", 00:14:19.750 "admin_qpairs": 0, 00:14:19.750 "io_qpairs": 0, 00:14:19.750 "current_admin_qpairs": 0, 00:14:19.750 "current_io_qpairs": 0, 00:14:19.750 "pending_bdev_io": 0, 00:14:19.750 "completed_nvme_io": 0, 00:14:19.750 "transports": [ 00:14:19.750 { 00:14:19.750 "trtype": "TCP" 00:14:19.750 } 00:14:19.750 ] 00:14:19.750 }, 00:14:19.750 { 00:14:19.750 "name": "nvmf_tgt_poll_group_003", 00:14:19.750 "admin_qpairs": 0, 00:14:19.750 "io_qpairs": 0, 00:14:19.750 "current_admin_qpairs": 0, 00:14:19.750 "current_io_qpairs": 0, 00:14:19.750 "pending_bdev_io": 0, 00:14:19.750 "completed_nvme_io": 0, 00:14:19.750 "transports": [ 00:14:19.750 { 00:14:19.750 "trtype": "TCP" 00:14:19.750 } 00:14:19.750 ] 00:14:19.750 } 00:14:19.750 ] 00:14:19.750 }' 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:19.750 
20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.750 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.010 Malloc1 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:20.010 20:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.010 [2024-12-05 20:34:13.222128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:20.010 [2024-12-05 20:34:13.250679] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:14:20.010 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:20.010 could not add new controller: failed to write to nvme-fabrics device 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:20.010 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.011 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.011 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.011 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.391 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:21.391 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:21.391 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.391 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:21.391 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:23.298 20:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:23.298 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.558 [2024-12-05 20:34:16.759164] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:14:23.558 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:23.558 could not add new controller: failed to write to nvme-fabrics device 00:14:23.558 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:23.558 
20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:23.558 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:23.558 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:23.558 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:23.558 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.558 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.558 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.558 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:24.937 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:24.937 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:24.937 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.937 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:24.937 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:26.843 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:26.843 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:26.843 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:26.843 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:26.843 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.843 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:26.843 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.843 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:26.843 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:27.103 20:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.103 [2024-12-05 20:34:20.344926] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.103 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:28.483 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:28.483 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:28.483 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.483 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:28.483 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.395 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:30.396 
20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.396 [2024-12-05 20:34:23.815852] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.396 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.669 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.669 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:32.053 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:32.053 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:32.053 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.053 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:32.053 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:33.959 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:33.959 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:33.959 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.959 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:33.959 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:33.959 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:33.959 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:33.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.959 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:33.959 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.960 20:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.960 [2024-12-05 20:34:27.301494] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.960 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:35.338 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:35.338 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:35.338 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:35.338 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:35.338 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:37.242 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:37.242 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:37.242 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:37.242 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:37.242 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:37.242 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:37.242 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:37.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.501 [2024-12-05 20:34:30.739340] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.501 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:38.878 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:38.878 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:38.878 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:14:38.878 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:38.878 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:40.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:40.787 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.047 [2024-12-05 20:34:34.268826] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.047 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:42.426 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:42.426 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:42.426 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.426 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:42.426 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:44.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.332 [2024-12-05 20:34:37.760151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.332 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 [2024-12-05 20:34:37.808235] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 
20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:14:44.592 [2024-12-05 20:34:37.856360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.592 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.593 [2024-12-05 20:34:37.904525] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:44.593 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.612 [2024-12-05 20:34:37.952683] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.612 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.612 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.612 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:44.612 "tick_rate": 2200000000, 00:14:44.612 "poll_groups": [ 00:14:44.612 { 00:14:44.612 "name": "nvmf_tgt_poll_group_000", 00:14:44.612 "admin_qpairs": 2, 00:14:44.612 "io_qpairs": 196, 00:14:44.612 "current_admin_qpairs": 0, 00:14:44.612 "current_io_qpairs": 0, 00:14:44.612 "pending_bdev_io": 0, 00:14:44.612 "completed_nvme_io": 247, 00:14:44.612 "transports": [ 00:14:44.612 { 00:14:44.612 "trtype": "TCP" 00:14:44.612 } 00:14:44.612 ] 00:14:44.612 }, 00:14:44.612 { 00:14:44.612 "name": "nvmf_tgt_poll_group_001", 00:14:44.612 "admin_qpairs": 2, 00:14:44.612 "io_qpairs": 196, 00:14:44.612 "current_admin_qpairs": 0, 00:14:44.612 "current_io_qpairs": 0, 00:14:44.612 "pending_bdev_io": 0, 00:14:44.612 "completed_nvme_io": 295, 00:14:44.612 "transports": [ 00:14:44.612 { 00:14:44.612 "trtype": "TCP" 00:14:44.612 } 00:14:44.612 ] 00:14:44.612 }, 00:14:44.612 { 00:14:44.612 "name": "nvmf_tgt_poll_group_002", 00:14:44.612 "admin_qpairs": 1, 00:14:44.612 "io_qpairs": 196, 00:14:44.612 "current_admin_qpairs": 0, 00:14:44.612 "current_io_qpairs": 0, 00:14:44.612 "pending_bdev_io": 0, 
00:14:44.613 "completed_nvme_io": 246, 00:14:44.613 "transports": [ 00:14:44.613 { 00:14:44.613 "trtype": "TCP" 00:14:44.613 } 00:14:44.613 ] 00:14:44.613 }, 00:14:44.613 { 00:14:44.613 "name": "nvmf_tgt_poll_group_003", 00:14:44.613 "admin_qpairs": 2, 00:14:44.613 "io_qpairs": 196, 00:14:44.613 "current_admin_qpairs": 0, 00:14:44.613 "current_io_qpairs": 0, 00:14:44.613 "pending_bdev_io": 0, 00:14:44.613 "completed_nvme_io": 346, 00:14:44.613 "transports": [ 00:14:44.613 { 00:14:44.613 "trtype": "TCP" 00:14:44.613 } 00:14:44.613 ] 00:14:44.613 } 00:14:44.613 ] 00:14:44.613 }' 00:14:44.613 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:44.613 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:44.613 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:44.613 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:44.871 rmmod nvme_tcp 00:14:44.871 rmmod nvme_fabrics 00:14:44.871 rmmod nvme_keyring 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 292349 ']' 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 292349 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 292349 ']' 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 292349 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292349 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292349' 00:14:44.871 killing process with pid 292349 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 292349 00:14:44.871 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 292349 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.130 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:47.669 00:14:47.669 real 0m34.697s 00:14:47.669 user 1m46.316s 00:14:47.669 sys 0m6.704s 00:14:47.669 20:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.669 ************************************ 00:14:47.669 END TEST nvmf_rpc 00:14:47.669 ************************************ 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:47.669 ************************************ 00:14:47.669 START TEST nvmf_invalid 00:14:47.669 ************************************ 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:47.669 * Looking for test storage... 
00:14:47.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.669 --rc genhtml_branch_coverage=1 00:14:47.669 --rc 
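The scripts/common.sh trace above (`IFS=.-`, `read -ra ver1`/`ver2`, then field-by-field `(( ))` compares) is a standard shell version comparison, here deciding whether the installed lcov predates 2.x. A minimal re-implementation, assuming bash; `ver_lt` is a hypothetical name for what the script reaches through `lt`/`cmp_versions`:

```shell
# Split both versions on '.' and '-' and compare numerically, field by field,
# padding missing fields with 0 (so "1.15" vs "2" compares 1<2 first).
ver_lt() {
    local IFS=.-
    local -a ver1 ver2
    read -ra ver1 <<<"$1"
    read -ra ver2 <<<"$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
    done
    return 1   # equal, so not less-than
}
```

This matches the `lt 1.15 2` call visible in the trace: it succeeds because the first fields already decide the comparison.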
genhtml_function_coverage=1 00:14:47.669 --rc genhtml_legend=1 00:14:47.669 --rc geninfo_all_blocks=1 00:14:47.669 --rc geninfo_unexecuted_blocks=1 00:14:47.669 00:14:47.669 ' 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.669 --rc genhtml_branch_coverage=1 00:14:47.669 --rc genhtml_function_coverage=1 00:14:47.669 --rc genhtml_legend=1 00:14:47.669 --rc geninfo_all_blocks=1 00:14:47.669 --rc geninfo_unexecuted_blocks=1 00:14:47.669 00:14:47.669 ' 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.669 --rc genhtml_branch_coverage=1 00:14:47.669 --rc genhtml_function_coverage=1 00:14:47.669 --rc genhtml_legend=1 00:14:47.669 --rc geninfo_all_blocks=1 00:14:47.669 --rc geninfo_unexecuted_blocks=1 00:14:47.669 00:14:47.669 ' 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.669 --rc genhtml_branch_coverage=1 00:14:47.669 --rc genhtml_function_coverage=1 00:14:47.669 --rc genhtml_legend=1 00:14:47.669 --rc geninfo_all_blocks=1 00:14:47.669 --rc geninfo_unexecuted_blocks=1 00:14:47.669 00:14:47.669 ' 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:47.669 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.670 20:34:40 
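The paths/export.sh@2-6 trace above grows PATH by re-prepending the same /opt/golangci, /opt/protoc, and /opt/go entries each time it is sourced, which is why the echoed PATH repeats them seven times. A sketch of an order-preserving de-duplication; `dedup_path` is a hypothetical helper, not part of export.sh:

```shell
# Drop repeated PATH entries while keeping first-occurrence order.
# Assumes entries contain no glob characters (true for the paths above).
dedup_path() {
    local IFS=: out= seen= p
    for p in $1; do
        case ":$seen:" in *":$p:"*) continue ;; esac   # already kept
        seen=${seen:+$seen:}$p
        out=${out:+$out:}$p
    done
    printf '%s\n' "$out"
}
```

Applied as `PATH=$(dedup_path "$PATH")` after sourcing, this would keep the echoed value readable without changing lookup order.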
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:47.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:47.670 20:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:47.670 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:54.244 20:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.244 20:34:46 
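The nvmf/common.sh@320-344 walk above buckets supported NICs by PCI vendor:device id into the e810, x722, and mlx arrays. The same classification as a single lookup; `nic_family` is a hypothetical name, the Intel ids are the ones visible in the trace, and the Mellanox wildcard is a simplification of the explicit id list the script uses:

```shell
# Map a PCI vendor:device pair to the driver family used by the test harness.
nic_family() {
    case $1 in
        0x8086:0x1592 | 0x8086:0x159b) echo e810 ;;    # Intel E810 (ice)
        0x8086:0x37d2)                 echo x722 ;;    # Intel X722
        0x15b3:*)                      echo mlx  ;;    # Mellanox (simplified)
        *)                             echo unknown ;;
    esac
}
```

On this node both discovered ports report 0x8086:0x159b, which is why the run continues down the e810 branch.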
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:54.244 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:54.244 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:54.244 Found net devices under 0000:af:00.0: cvl_0_0 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:54.244 Found net devices under 0000:af:00.1: cvl_0_1 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.244 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.244 20:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.245 20:34:46 
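The nvmf/common.sh@265-284 sequence above is the target/initiator split: one port (cvl_0_0) is moved into a fresh namespace for the target at 10.0.0.2 while the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch; since real execution needs root, this hypothetical `setup_ns` helper just prints the commands it would run, so it can be reviewed or piped to `sh` as root:

```shell
# Emit the namespace plumbing commands from nvmf_tcp_init, parameterized.
setup_ns() {
    local ns=$1 tgt_if=$2 ini_if=$3 tgt_ip=$4 ini_ip=$5
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add $ini_ip/24 dev $ini_if
ip netns exec $ns ip addr add $tgt_ip/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
EOF
}
```

Bringing up `lo` inside the namespace is easy to forget but required: the target app later binds its RPC socket there.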
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:54.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:14:54.245 00:14:54.245 --- 10.0.0.2 ping statistics --- 00:14:54.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.245 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:14:54.245 00:14:54.245 --- 10.0.0.1 ping statistics --- 00:14:54.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.245 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:54.245 20:34:46 
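The `ipts` wrapper at nvmf/common.sh@287/@790 above appends `-m comment --comment 'SPDK_NVMF:<rule>'` to every rule it installs, which is what lets the earlier teardown (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) strip only test-added rules. A sketch of the tagging idea that echoes instead of executing; `tag_rule` is a hypothetical name:

```shell
# Build an iptables invocation tagged with a removable SPDK_NVMF comment.
# Echoed rather than executed so it can be inspected without root.
tag_rule() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

Filtering a save/restore round trip on the comment is more robust than remembering each `-D` counterpart, since it also catches rules from crashed earlier runs.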
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=300750 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 300750 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 300750 ']' 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.245 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:54.245 [2024-12-05 20:34:46.817147] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:14:54.245 [2024-12-05 20:34:46.817192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.245 [2024-12-05 20:34:46.896011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.245 [2024-12-05 20:34:46.937749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.245 [2024-12-05 20:34:46.937782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.245 [2024-12-05 20:34:46.937788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.245 [2024-12-05 20:34:46.937793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.245 [2024-12-05 20:34:46.937798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:54.245 [2024-12-05 20:34:46.939471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.245 [2024-12-05 20:34:46.939502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.245 [2024-12-05 20:34:46.939615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.245 [2024-12-05 20:34:46.939616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.245 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.245 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:54.245 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:54.245 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:54.245 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:54.245 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.245 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:54.245 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7815 00:14:54.506 [2024-12-05 20:34:47.818635] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:54.506 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:54.506 { 00:14:54.506 "nqn": "nqn.2016-06.io.spdk:cnode7815", 00:14:54.506 "tgt_name": "foobar", 00:14:54.506 "method": "nvmf_create_subsystem", 00:14:54.506 "req_id": 1 00:14:54.506 } 00:14:54.506 Got JSON-RPC error 
response 00:14:54.506 response: 00:14:54.506 { 00:14:54.506 "code": -32603, 00:14:54.506 "message": "Unable to find target foobar" 00:14:54.506 }' 00:14:54.506 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:54.506 { 00:14:54.506 "nqn": "nqn.2016-06.io.spdk:cnode7815", 00:14:54.506 "tgt_name": "foobar", 00:14:54.506 "method": "nvmf_create_subsystem", 00:14:54.506 "req_id": 1 00:14:54.506 } 00:14:54.506 Got JSON-RPC error response 00:14:54.506 response: 00:14:54.506 { 00:14:54.506 "code": -32603, 00:14:54.506 "message": "Unable to find target foobar" 00:14:54.506 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:54.506 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:54.506 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25947 00:14:54.766 [2024-12-05 20:34:48.011303] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25947: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:54.766 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:54.766 { 00:14:54.766 "nqn": "nqn.2016-06.io.spdk:cnode25947", 00:14:54.766 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:54.766 "method": "nvmf_create_subsystem", 00:14:54.766 "req_id": 1 00:14:54.766 } 00:14:54.766 Got JSON-RPC error response 00:14:54.766 response: 00:14:54.766 { 00:14:54.766 "code": -32602, 00:14:54.766 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:54.766 }' 00:14:54.766 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:54.766 { 00:14:54.766 "nqn": "nqn.2016-06.io.spdk:cnode25947", 00:14:54.766 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:54.766 "method": "nvmf_create_subsystem", 00:14:54.766 
"req_id": 1 00:14:54.766 } 00:14:54.766 Got JSON-RPC error response 00:14:54.766 response: 00:14:54.766 { 00:14:54.766 "code": -32602, 00:14:54.766 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:54.766 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:54.766 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:54.766 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24036 00:14:54.767 [2024-12-05 20:34:48.199873] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24036: invalid model number 'SPDK_Controller' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:55.027 { 00:14:55.027 "nqn": "nqn.2016-06.io.spdk:cnode24036", 00:14:55.027 "model_number": "SPDK_Controller\u001f", 00:14:55.027 "method": "nvmf_create_subsystem", 00:14:55.027 "req_id": 1 00:14:55.027 } 00:14:55.027 Got JSON-RPC error response 00:14:55.027 response: 00:14:55.027 { 00:14:55.027 "code": -32602, 00:14:55.027 "message": "Invalid MN SPDK_Controller\u001f" 00:14:55.027 }' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:55.027 { 00:14:55.027 "nqn": "nqn.2016-06.io.spdk:cnode24036", 00:14:55.027 "model_number": "SPDK_Controller\u001f", 00:14:55.027 "method": "nvmf_create_subsystem", 00:14:55.027 "req_id": 1 00:14:55.027 } 00:14:55.027 Got JSON-RPC error response 00:14:55.027 response: 00:14:55.027 { 00:14:55.027 "code": -32602, 00:14:55.027 "message": "Invalid MN SPDK_Controller\u001f" 00:14:55.027 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.027 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:55.027 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:55.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 
00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:55.028 
20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.028 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ C == \- ]] 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'C fm5v,].Mkx6*4D'\''0mLe' 00:14:55.028 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'C fm5v,].Mkx6*4D'\''0mLe' nqn.2016-06.io.spdk:cnode9456 00:14:55.288 [2024-12-05 20:34:48.540966] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9456: invalid serial number 'C fm5v,].Mkx6*4D'0mLe' 00:14:55.288 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:55.288 { 00:14:55.288 "nqn": "nqn.2016-06.io.spdk:cnode9456", 00:14:55.288 "serial_number": "C fm5v,].Mkx6*4D'\''0mLe", 00:14:55.288 "method": "nvmf_create_subsystem", 00:14:55.288 "req_id": 1 00:14:55.288 } 00:14:55.288 Got JSON-RPC error response 00:14:55.288 response: 00:14:55.288 { 00:14:55.288 "code": -32602, 00:14:55.288 "message": "Invalid SN C fm5v,].Mkx6*4D'\''0mLe" 00:14:55.288 }' 00:14:55.288 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:55.288 { 00:14:55.288 "nqn": "nqn.2016-06.io.spdk:cnode9456", 00:14:55.288 "serial_number": "C fm5v,].Mkx6*4D'0mLe", 00:14:55.288 "method": "nvmf_create_subsystem", 00:14:55.288 "req_id": 1 00:14:55.288 } 00:14:55.288 Got JSON-RPC error 
response 00:14:55.288 response: 00:14:55.288 { 00:14:55.288 "code": -32602, 00:14:55.288 "message": "Invalid SN C fm5v,].Mkx6*4D'0mLe" 00:14:55.288 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:55.288 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:55.289 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 
00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:55.289 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:55.290 
20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.290 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.290 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:55.550 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:55.551 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:55.551 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:55.551 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '4^?2eizHNjy?;j0o+u#6bt'\''ntnM|ygYA}Ac$Lcx}Q' 00:14:55.551 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '4^?2eizHNjy?;j0o+u#6bt'\''ntnM|ygYA}Ac$Lcx}Q' nqn.2016-06.io.spdk:cnode27753 00:14:55.811 [2024-12-05 20:34:49.002489] 
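The loop traced above assembles the random model number one character at a time: each codepoint is rendered as hex with `printf %x` and decoded back to a character with `echo -e '\xNN'`. A minimal sketch of that technique, using the first four codepoints visible in the trace (52, 94, 63, 50, which yield the `4^?2` prefix of the string passed to `nvmf_create_subsystem`):

```shell
# Rebuild a string byte-by-byte the way target/invalid.sh does:
# format the decimal codepoint as hex, decode it with echo -e, append.
string=''
for code in 52 94 63 50; do          # sample codepoints from the trace above
    hex=$(printf %x "$code")         # e.g. 52 -> 34
    ch=$(echo -e "\\x$hex")          # decode \x34 -> '4'
    string+=$ch
done
echo "$string"                       # -> 4^?2
```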
nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27753: invalid model number '4^?2eizHNjy?;j0o+u#6bt'ntnM|ygYA}Ac$Lcx}Q' 00:14:55.812 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:55.812 { 00:14:55.812 "nqn": "nqn.2016-06.io.spdk:cnode27753", 00:14:55.812 "model_number": "4^?2eizHNjy?;j0o+u#6bt'\''ntnM|ygYA}Ac$Lcx}Q", 00:14:55.812 "method": "nvmf_create_subsystem", 00:14:55.812 "req_id": 1 00:14:55.812 } 00:14:55.812 Got JSON-RPC error response 00:14:55.812 response: 00:14:55.812 { 00:14:55.812 "code": -32602, 00:14:55.812 "message": "Invalid MN 4^?2eizHNjy?;j0o+u#6bt'\''ntnM|ygYA}Ac$Lcx}Q" 00:14:55.812 }' 00:14:55.812 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:55.812 { 00:14:55.812 "nqn": "nqn.2016-06.io.spdk:cnode27753", 00:14:55.812 "model_number": "4^?2eizHNjy?;j0o+u#6bt'ntnM|ygYA}Ac$Lcx}Q", 00:14:55.812 "method": "nvmf_create_subsystem", 00:14:55.812 "req_id": 1 00:14:55.812 } 00:14:55.812 Got JSON-RPC error response 00:14:55.812 response: 00:14:55.812 { 00:14:55.812 "code": -32602, 00:14:55.812 "message": "Invalid MN 4^?2eizHNjy?;j0o+u#6bt'ntnM|ygYA}Ac$Lcx}Q" 00:14:55.812 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:55.812 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:55.812 [2024-12-05 20:34:49.187143] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.812 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:56.071 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:56.071 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 
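The `[[ $out == *\I\n\v\a\l\i\d\ \M\N* ]]` comparisons above rely on bash glob matching inside `[[ ]]`: every character of the expected error text is backslash-escaped so it is matched literally, with unescaped `*` wildcards on either side. A minimal sketch of the same idiom with a hypothetical response string:

```shell
# Literal substring match via escaped glob inside [[ ]]:
# escaped characters match themselves, the bare * acts as a wildcard.
out='Got JSON-RPC error response: Invalid MN foo'   # hypothetical sample
matched=no
[[ $out == *\I\n\v\a\l\i\d\ \M\N* ]] && matched=yes
echo "$matched"                                     # -> yes
```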
00:14:56.071 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:56.071 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:56.071 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:56.331 [2024-12-05 20:34:49.569318] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:56.331 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:56.331 { 00:14:56.331 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:56.331 "listen_address": { 00:14:56.331 "trtype": "tcp", 00:14:56.331 "traddr": "", 00:14:56.331 "trsvcid": "4421" 00:14:56.331 }, 00:14:56.331 "method": "nvmf_subsystem_remove_listener", 00:14:56.331 "req_id": 1 00:14:56.331 } 00:14:56.331 Got JSON-RPC error response 00:14:56.331 response: 00:14:56.331 { 00:14:56.331 "code": -32602, 00:14:56.331 "message": "Invalid parameters" 00:14:56.331 }' 00:14:56.331 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:56.331 { 00:14:56.331 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:56.331 "listen_address": { 00:14:56.331 "trtype": "tcp", 00:14:56.331 "traddr": "", 00:14:56.331 "trsvcid": "4421" 00:14:56.331 }, 00:14:56.331 "method": "nvmf_subsystem_remove_listener", 00:14:56.331 "req_id": 1 00:14:56.331 } 00:14:56.331 Got JSON-RPC error response 00:14:56.331 response: 00:14:56.331 { 00:14:56.331 "code": -32602, 00:14:56.331 "message": "Invalid parameters" 00:14:56.331 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:56.331 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15899 -i 0 00:14:56.331 [2024-12-05 
20:34:49.765929] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15899: invalid cntlid range [0-65519] 00:14:56.591 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:56.591 { 00:14:56.591 "nqn": "nqn.2016-06.io.spdk:cnode15899", 00:14:56.591 "min_cntlid": 0, 00:14:56.591 "method": "nvmf_create_subsystem", 00:14:56.591 "req_id": 1 00:14:56.591 } 00:14:56.591 Got JSON-RPC error response 00:14:56.591 response: 00:14:56.591 { 00:14:56.591 "code": -32602, 00:14:56.591 "message": "Invalid cntlid range [0-65519]" 00:14:56.591 }' 00:14:56.591 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:56.591 { 00:14:56.591 "nqn": "nqn.2016-06.io.spdk:cnode15899", 00:14:56.591 "min_cntlid": 0, 00:14:56.591 "method": "nvmf_create_subsystem", 00:14:56.591 "req_id": 1 00:14:56.591 } 00:14:56.591 Got JSON-RPC error response 00:14:56.591 response: 00:14:56.591 { 00:14:56.591 "code": -32602, 00:14:56.591 "message": "Invalid cntlid range [0-65519]" 00:14:56.591 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:56.591 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29415 -i 65520 00:14:56.591 [2024-12-05 20:34:49.962567] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29415: invalid cntlid range [65520-65519] 00:14:56.591 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:56.591 { 00:14:56.591 "nqn": "nqn.2016-06.io.spdk:cnode29415", 00:14:56.591 "min_cntlid": 65520, 00:14:56.591 "method": "nvmf_create_subsystem", 00:14:56.591 "req_id": 1 00:14:56.591 } 00:14:56.591 Got JSON-RPC error response 00:14:56.591 response: 00:14:56.591 { 00:14:56.591 "code": -32602, 00:14:56.591 "message": "Invalid cntlid range [65520-65519]" 
00:14:56.591 }' 00:14:56.591 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:56.591 { 00:14:56.591 "nqn": "nqn.2016-06.io.spdk:cnode29415", 00:14:56.591 "min_cntlid": 65520, 00:14:56.591 "method": "nvmf_create_subsystem", 00:14:56.591 "req_id": 1 00:14:56.591 } 00:14:56.591 Got JSON-RPC error response 00:14:56.591 response: 00:14:56.591 { 00:14:56.591 "code": -32602, 00:14:56.591 "message": "Invalid cntlid range [65520-65519]" 00:14:56.591 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:56.591 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15710 -I 0 00:14:56.850 [2024-12-05 20:34:50.155273] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15710: invalid cntlid range [1-0] 00:14:56.850 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:56.850 { 00:14:56.850 "nqn": "nqn.2016-06.io.spdk:cnode15710", 00:14:56.850 "max_cntlid": 0, 00:14:56.850 "method": "nvmf_create_subsystem", 00:14:56.850 "req_id": 1 00:14:56.850 } 00:14:56.850 Got JSON-RPC error response 00:14:56.851 response: 00:14:56.851 { 00:14:56.851 "code": -32602, 00:14:56.851 "message": "Invalid cntlid range [1-0]" 00:14:56.851 }' 00:14:56.851 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:56.851 { 00:14:56.851 "nqn": "nqn.2016-06.io.spdk:cnode15710", 00:14:56.851 "max_cntlid": 0, 00:14:56.851 "method": "nvmf_create_subsystem", 00:14:56.851 "req_id": 1 00:14:56.851 } 00:14:56.851 Got JSON-RPC error response 00:14:56.851 response: 00:14:56.851 { 00:14:56.851 "code": -32602, 00:14:56.851 "message": "Invalid cntlid range [1-0]" 00:14:56.851 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:56.851 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode745 -I 65520 00:14:57.109 [2024-12-05 20:34:50.343888] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode745: invalid cntlid range [1-65520] 00:14:57.109 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:57.109 { 00:14:57.109 "nqn": "nqn.2016-06.io.spdk:cnode745", 00:14:57.109 "max_cntlid": 65520, 00:14:57.109 "method": "nvmf_create_subsystem", 00:14:57.109 "req_id": 1 00:14:57.109 } 00:14:57.109 Got JSON-RPC error response 00:14:57.109 response: 00:14:57.109 { 00:14:57.109 "code": -32602, 00:14:57.109 "message": "Invalid cntlid range [1-65520]" 00:14:57.109 }' 00:14:57.109 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:57.109 { 00:14:57.109 "nqn": "nqn.2016-06.io.spdk:cnode745", 00:14:57.109 "max_cntlid": 65520, 00:14:57.109 "method": "nvmf_create_subsystem", 00:14:57.109 "req_id": 1 00:14:57.109 } 00:14:57.109 Got JSON-RPC error response 00:14:57.109 response: 00:14:57.109 { 00:14:57.109 "code": -32602, 00:14:57.109 "message": "Invalid cntlid range [1-65520]" 00:14:57.109 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:57.109 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7448 -i 6 -I 5 00:14:57.109 [2024-12-05 20:34:50.536543] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7448: invalid cntlid range [6-5] 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:57.369 { 00:14:57.369 "nqn": "nqn.2016-06.io.spdk:cnode7448", 00:14:57.369 "min_cntlid": 6, 00:14:57.369 "max_cntlid": 5, 00:14:57.369 "method": "nvmf_create_subsystem", 00:14:57.369 "req_id": 1 00:14:57.369 } 00:14:57.369 
Got JSON-RPC error response 00:14:57.369 response: 00:14:57.369 { 00:14:57.369 "code": -32602, 00:14:57.369 "message": "Invalid cntlid range [6-5]" 00:14:57.369 }' 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:57.369 { 00:14:57.369 "nqn": "nqn.2016-06.io.spdk:cnode7448", 00:14:57.369 "min_cntlid": 6, 00:14:57.369 "max_cntlid": 5, 00:14:57.369 "method": "nvmf_create_subsystem", 00:14:57.369 "req_id": 1 00:14:57.369 } 00:14:57.369 Got JSON-RPC error response 00:14:57.369 response: 00:14:57.369 { 00:14:57.369 "code": -32602, 00:14:57.369 "message": "Invalid cntlid range [6-5]" 00:14:57.369 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:57.369 { 00:14:57.369 "name": "foobar", 00:14:57.369 "method": "nvmf_delete_target", 00:14:57.369 "req_id": 1 00:14:57.369 } 00:14:57.369 Got JSON-RPC error response 00:14:57.369 response: 00:14:57.369 { 00:14:57.369 "code": -32602, 00:14:57.369 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:57.369 }' 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:57.369 { 00:14:57.369 "name": "foobar", 00:14:57.369 "method": "nvmf_delete_target", 00:14:57.369 "req_id": 1 00:14:57.369 } 00:14:57.369 Got JSON-RPC error response 00:14:57.369 response: 00:14:57.369 { 00:14:57.369 "code": -32602, 00:14:57.369 "message": "The specified target doesn't exist, cannot delete it." 
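The cntlid cases above probe the boundaries of the controller ID range: `[0-65519]`, `[65520-65519]`, `[1-0]`, `[1-65520]`, and `[6-5]` are all rejected, which implies IDs must lie in 1-65519 with min not exceeding max. A hypothetical re-implementation of that check (this is a sketch inferred from the error messages, not SPDK's actual `rpc_nvmf_create_subsystem` code):

```shell
# Inferred validity rule for a cntlid range: 1 <= min <= max <= 65519.
valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && max <= 65519 && min <= max ))
}
valid_cntlid_range 1 65519 && echo 'valid'                       # accepted
valid_cntlid_range 0 65519 || echo 'Invalid cntlid range [0-65519]'
valid_cntlid_range 6 5     || echo 'Invalid cntlid range [6-5]'
```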
00:14:57.369 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:57.369 rmmod nvme_tcp 00:14:57.369 rmmod nvme_fabrics 00:14:57.369 rmmod nvme_keyring 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 300750 ']' 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 300750 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 300750 ']' 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 300750 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 300750 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 300750' 00:14:57.369 killing process with pid 300750 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 300750 00:14:57.369 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 300750 00:14:57.629 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:57.629 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:57.629 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:57.629 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:57.629 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:57.629 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:57.629 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:57.629 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:57.629 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:57.629 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.629 20:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.629 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:00.199 00:15:00.199 real 0m12.440s 00:15:00.199 user 0m20.239s 00:15:00.199 sys 0m5.406s 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:00.199 ************************************ 00:15:00.199 END TEST nvmf_invalid 00:15:00.199 ************************************ 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:00.199 ************************************ 00:15:00.199 START TEST nvmf_connect_stress 00:15:00.199 ************************************ 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:00.199 * Looking for test storage... 
00:15:00.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:00.199 20:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:00.199 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.200 20:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:00.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.200 --rc genhtml_branch_coverage=1 00:15:00.200 --rc genhtml_function_coverage=1 00:15:00.200 --rc genhtml_legend=1 00:15:00.200 --rc geninfo_all_blocks=1 00:15:00.200 --rc geninfo_unexecuted_blocks=1 00:15:00.200 00:15:00.200 ' 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:00.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.200 --rc genhtml_branch_coverage=1 00:15:00.200 --rc genhtml_function_coverage=1 00:15:00.200 --rc genhtml_legend=1 00:15:00.200 --rc geninfo_all_blocks=1 00:15:00.200 --rc geninfo_unexecuted_blocks=1 00:15:00.200 00:15:00.200 ' 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:00.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.200 --rc genhtml_branch_coverage=1 00:15:00.200 --rc genhtml_function_coverage=1 00:15:00.200 --rc genhtml_legend=1 00:15:00.200 --rc geninfo_all_blocks=1 00:15:00.200 --rc geninfo_unexecuted_blocks=1 00:15:00.200 00:15:00.200 ' 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:00.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.200 --rc genhtml_branch_coverage=1 00:15:00.200 --rc genhtml_function_coverage=1 00:15:00.200 --rc genhtml_legend=1 00:15:00.200 --rc geninfo_all_blocks=1 00:15:00.200 --rc geninfo_unexecuted_blocks=1 00:15:00.200 00:15:00.200 ' 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:00.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:00.200 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.775 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.775 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:06.775 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:06.775 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:06.775 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:06.775 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:06.775 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:06.775 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:06.775 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:06.775 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:06.775 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.776 20:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:06.776 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:06.776 20:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:06.776 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.776 20:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:06.776 Found net devices under 0000:af:00.0: cvl_0_0 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:06.776 Found net devices under 0000:af:00.1: cvl_0_1 
00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:06.776 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:06.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:06.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:15:06.776 00:15:06.776 --- 10.0.0.2 ping statistics --- 00:15:06.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.776 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:15:06.776 00:15:06.776 --- 10.0.0.1 ping statistics --- 00:15:06.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.776 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.776 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:06.777 20:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=305403 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 305403 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 305403 ']' 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.777 [2024-12-05 20:34:59.347364] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:15:06.777 [2024-12-05 20:34:59.347408] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.777 [2024-12-05 20:34:59.422417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.777 [2024-12-05 20:34:59.459011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.777 [2024-12-05 20:34:59.459044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.777 [2024-12-05 20:34:59.459053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.777 [2024-12-05 20:34:59.459062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.777 [2024-12-05 20:34:59.459066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:06.777 [2024-12-05 20:34:59.460396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.777 [2024-12-05 20:34:59.460510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.777 [2024-12-05 20:34:59.460511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.777 [2024-12-05 20:34:59.604510] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.777 [2024-12-05 20:34:59.624736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.777 NULL1 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=305425 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:06.777 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 305425 00:15:06.778 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.778 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.778 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.778 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.337 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 305425 00:15:16.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (305425) - No such process 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 305425 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:16.596 rmmod nvme_tcp 00:15:16.596 rmmod nvme_fabrics 00:15:16.596 rmmod nvme_keyring 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 305403 ']' 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 305403 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 305403 ']' 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 305403 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305403 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305403' 00:15:16.596 killing process with pid 305403 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 305403 00:15:16.596 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 305403 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.856 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.761 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:18.761 00:15:18.761 real 0m19.091s 00:15:18.761 user 0m41.474s 00:15:18.761 sys 0m6.719s 00:15:18.761 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.761 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.761 ************************************ 00:15:18.761 END TEST nvmf_connect_stress 00:15:18.761 ************************************ 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:19.019 ************************************ 00:15:19.019 START TEST nvmf_fused_ordering 00:15:19.019 ************************************ 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:19.019 * Looking for test storage... 
00:15:19.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:19.019 20:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:19.019 20:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:19.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.019 --rc genhtml_branch_coverage=1 00:15:19.019 --rc genhtml_function_coverage=1 00:15:19.019 --rc genhtml_legend=1 00:15:19.019 --rc geninfo_all_blocks=1 00:15:19.019 --rc geninfo_unexecuted_blocks=1 00:15:19.019 00:15:19.019 ' 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:19.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.019 --rc genhtml_branch_coverage=1 00:15:19.019 --rc genhtml_function_coverage=1 00:15:19.019 --rc genhtml_legend=1 00:15:19.019 --rc geninfo_all_blocks=1 00:15:19.019 --rc geninfo_unexecuted_blocks=1 00:15:19.019 00:15:19.019 ' 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:19.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.019 --rc genhtml_branch_coverage=1 00:15:19.019 --rc genhtml_function_coverage=1 00:15:19.019 --rc genhtml_legend=1 00:15:19.019 --rc geninfo_all_blocks=1 00:15:19.019 --rc geninfo_unexecuted_blocks=1 00:15:19.019 00:15:19.019 ' 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:19.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.019 --rc genhtml_branch_coverage=1 00:15:19.019 --rc genhtml_function_coverage=1 00:15:19.019 --rc genhtml_legend=1 00:15:19.019 --rc geninfo_all_blocks=1 00:15:19.019 --rc geninfo_unexecuted_blocks=1 00:15:19.019 00:15:19.019 ' 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:19.019 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.020 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.020 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.278 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:19.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
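The `[: : integer expression expected` message above (from `common.sh` line 33, where the trace shows `'[' '' -eq 1 ']'`) is bash refusing to compare an empty string numerically with `-eq`. A minimal sketch of the failure and a guarded form, assuming the flag variable may simply be unset or empty (variable names here are illustrative):

```shell
# Reproduce: a numeric test against an empty value fails with
# "[: : integer expression expected" and a non-zero exit status.
SPDK_TEST_FLAG=""

if [ "$SPDK_TEST_FLAG" -eq 1 ] 2>/dev/null; then
    reproduced="no"
else
    reproduced="yes"   # the bare test errors out on an empty string
fi

# Guarded form: default the value to 0 before the numeric comparison,
# so an unset/empty flag is treated as "disabled" instead of erroring.
if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
    flag_state="enabled"
else
    flag_state="disabled"
fi

echo "reproduced=$reproduced flag_state=$flag_state"
```

The error in the log is harmless here because the script falls through to the `else` path, but the `${var:-0}` default is the usual way to keep such tests quiet.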
00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:19.279 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.856 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.856 20:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:25.857 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:25.857 20:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:25.857 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.857 20:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:25.857 Found net devices under 0000:af:00.0: cvl_0_0 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:25.857 Found net devices under 0000:af:00.1: cvl_0_1 
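The `pci_net_devs=("${pci_net_devs[@]##*/}")` step traced above maps a PCI address (e.g. `0000:af:00.0`) to its kernel net device names by globbing the device's `net/` directory in sysfs and keeping only the path basenames. A self-contained sketch of that pattern against a throwaway stand-in tree, since the real `/sys/bus/pci/devices` layout needs hardware (paths and the `cvl_0_0` name are taken from the log, the temp tree is illustrative):

```shell
# Build a throwaway stand-in for /sys/bus/pci/devices/<pci>/net/<ifname>.
sysfs="$(mktemp -d)"
pci="0000:af:00.0"
mkdir -p "$sysfs/$pci/net/cvl_0_0"

# Glob the device's net/ directory, then strip everything up to the last
# '/' — the same "${arr[@]##*/}" expansion the test script uses.
pci_net_devs=("$sysfs/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```

On the real host this yields exactly the `Found net devices under 0000:af:00.0: cvl_0_0` lines seen in the log.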
00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:25.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:25.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:15:25.857 00:15:25.857 --- 10.0.0.2 ping statistics --- 00:15:25.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.857 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:15:25.857 00:15:25.857 --- 10.0.0.1 ping statistics --- 00:15:25.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.857 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:25.857 20:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=311003 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 311003 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 311003 ']' 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.857 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.858 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.858 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.858 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:25.858 [2024-12-05 20:35:18.494939] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
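The `waitforlisten 311003` call above polls until the freshly launched `nvmf_tgt` is up and accepting RPCs on `/var/tmp/spdk.sock`, giving up after `max_retries=100` attempts. A simplified stand-in for that retry loop, polling for a path to appear — the real helper also checks that the pid is still alive and probes the RPC socket itself, so treat the function below as an illustrative sketch, not the SPDK implementation:

```shell
# wait_for_path PATH [MAX_RETRIES]: poll until PATH exists, mimicking the
# bounded retry loop in waitforlisten. Returns non-zero on timeout.
wait_for_path() {
    local path="$1" max_retries="${2:-100}" i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for $path" >&2
            return 1
        fi
        sleep 0.1
    done
    return 0
}

# Usage: create the "socket" shortly after we start waiting on it.
sock="$(mktemp -u)"            # a path that does not exist yet
( sleep 0.3; touch "$sock" ) &
if wait_for_path "$sock" 100; then
    ready="yes"
else
    ready="no"
fi
wait                           # reap the background helper
rm -f "$sock"
echo "ready=$ready"
```

With 100 retries at 0.1 s each this bounds the wait at roughly ten seconds, which matches the "Waiting for process to start up and listen on UNIX domain socket ..." behaviour in the log.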
00:15:25.858 [2024-12-05 20:35:18.494978] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.858 [2024-12-05 20:35:18.569911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.858 [2024-12-05 20:35:18.608208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.858 [2024-12-05 20:35:18.608249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.858 [2024-12-05 20:35:18.608256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.858 [2024-12-05 20:35:18.608261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.858 [2024-12-05 20:35:18.608266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:25.858 [2024-12-05 20:35:18.608748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.117 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.117 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:26.117 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:26.117 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:26.117 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:26.117 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.117 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:26.117 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.117 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:26.117 [2024-12-05 20:35:19.345559] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.117 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.117 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:26.118 [2024-12-05 20:35:19.365737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:26.118 NULL1 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.118 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:26.118 [2024-12-05 20:35:19.423780] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:15:26.118 [2024-12-05 20:35:19.423810] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311270 ] 00:15:26.377 Attached to nqn.2016-06.io.spdk:cnode1 00:15:26.377 Namespace ID: 1 size: 1GB 00:15:26.377 fused_ordering(0) 00:15:26.377 fused_ordering(1) 00:15:26.377 fused_ordering(2) 00:15:26.377 fused_ordering(3) 00:15:26.377 fused_ordering(4) 00:15:26.377 fused_ordering(5) 00:15:26.377 fused_ordering(6) 00:15:26.377 fused_ordering(7) 00:15:26.377 fused_ordering(8) 00:15:26.377 fused_ordering(9) 00:15:26.377 fused_ordering(10) 00:15:26.377 fused_ordering(11) 00:15:26.377 fused_ordering(12) 00:15:26.377 fused_ordering(13) 00:15:26.377 fused_ordering(14) 00:15:26.377 fused_ordering(15) 00:15:26.377 fused_ordering(16) 00:15:26.377 fused_ordering(17) 00:15:26.377 fused_ordering(18) 00:15:26.377 fused_ordering(19) 00:15:26.377 fused_ordering(20) 00:15:26.377 fused_ordering(21) 00:15:26.377 fused_ordering(22) 00:15:26.377 fused_ordering(23) 00:15:26.377 fused_ordering(24) 00:15:26.377 fused_ordering(25) 00:15:26.377 fused_ordering(26) 00:15:26.377 fused_ordering(27) 00:15:26.377 
fused_ordering(28) 00:15:26.377 [... one "fused_ordering(N)" entry per iteration, N = 28 through 997, timestamps advancing from 00:15:26.377 to 00:15:27.735; repetitive per-iteration counter output condensed ...] 00:15:27.735 fused_ordering(997)
00:15:27.735 fused_ordering(998) 00:15:27.735 fused_ordering(999) 00:15:27.735 fused_ordering(1000) 00:15:27.735 fused_ordering(1001) 00:15:27.735 fused_ordering(1002) 00:15:27.735 fused_ordering(1003) 00:15:27.735 fused_ordering(1004) 00:15:27.735 fused_ordering(1005) 00:15:27.735 fused_ordering(1006) 00:15:27.735 fused_ordering(1007) 00:15:27.735 fused_ordering(1008) 00:15:27.735 fused_ordering(1009) 00:15:27.735 fused_ordering(1010) 00:15:27.735 fused_ordering(1011) 00:15:27.735 fused_ordering(1012) 00:15:27.735 fused_ordering(1013) 00:15:27.735 fused_ordering(1014) 00:15:27.735 fused_ordering(1015) 00:15:27.735 fused_ordering(1016) 00:15:27.735 fused_ordering(1017) 00:15:27.735 fused_ordering(1018) 00:15:27.735 fused_ordering(1019) 00:15:27.735 fused_ordering(1020) 00:15:27.735 fused_ordering(1021) 00:15:27.735 fused_ordering(1022) 00:15:27.735 fused_ordering(1023) 00:15:27.735 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:27.735 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:27.735 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:27.735 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:27.735 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:27.735 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:27.735 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:27.735 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:27.735 rmmod nvme_tcp 00:15:27.735 rmmod nvme_fabrics 00:15:27.735 rmmod nvme_keyring 00:15:27.735 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 311003 ']' 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 311003 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 311003 ']' 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 311003 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 311003 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 311003' 00:15:27.735 killing process with pid 311003 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 311003 00:15:27.735 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 311003 00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.999 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.906 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:29.906 00:15:29.906 real 0m11.030s 00:15:29.906 user 0m5.641s 00:15:29.906 sys 0m5.401s 00:15:29.906 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.906 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:29.906 ************************************ 00:15:29.906 END TEST nvmf_fused_ordering 00:15:29.906 ************************************ 00:15:29.906 20:35:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:29.906 20:35:23 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.906 20:35:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.906 20:35:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:30.167 ************************************ 00:15:30.167 START TEST nvmf_ns_masking 00:15:30.167 ************************************ 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:30.167 * Looking for test storage... 00:15:30.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.167 20:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:30.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.167 --rc genhtml_branch_coverage=1 00:15:30.167 --rc genhtml_function_coverage=1 00:15:30.167 --rc genhtml_legend=1 00:15:30.167 --rc geninfo_all_blocks=1 00:15:30.167 --rc geninfo_unexecuted_blocks=1 00:15:30.167 00:15:30.167 ' 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:30.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.167 --rc genhtml_branch_coverage=1 00:15:30.167 --rc genhtml_function_coverage=1 00:15:30.167 --rc genhtml_legend=1 00:15:30.167 --rc geninfo_all_blocks=1 00:15:30.167 --rc geninfo_unexecuted_blocks=1 00:15:30.167 00:15:30.167 ' 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:30.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.167 --rc genhtml_branch_coverage=1 00:15:30.167 --rc genhtml_function_coverage=1 00:15:30.167 --rc genhtml_legend=1 00:15:30.167 --rc geninfo_all_blocks=1 00:15:30.167 --rc geninfo_unexecuted_blocks=1 00:15:30.167 00:15:30.167 ' 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:30.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.167 --rc genhtml_branch_coverage=1 00:15:30.167 --rc 
genhtml_function_coverage=1 00:15:30.167 --rc genhtml_legend=1 00:15:30.167 --rc geninfo_all_blocks=1 00:15:30.167 --rc geninfo_unexecuted_blocks=1 00:15:30.167 00:15:30.167 ' 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.167 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:30.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4cfbd1df-3208-4f71-9f9d-b39cc188d100 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=705cee41-b13b-45a9-af52-0b027784d747 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5a73c907-486a-4d35-8cf5-49aa254e9db7 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:30.168 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:36.747 20:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.747 20:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:36.747 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:36.747 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:15:36.747 Found net devices under 0000:af:00.0: cvl_0_0 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:36.747 Found net devices under 0000:af:00.1: cvl_0_1 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.747 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:36.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:15:36.748 00:15:36.748 --- 10.0.0.2 ping statistics --- 00:15:36.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.748 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:15:36.748 00:15:36.748 --- 10.0.0.1 ping statistics --- 00:15:36.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.748 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=315170 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 315170 
00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 315170 ']' 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.748 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.748 [2024-12-05 20:35:29.636324] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:15:36.748 [2024-12-05 20:35:29.636365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.748 [2024-12-05 20:35:29.712776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.748 [2024-12-05 20:35:29.750784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.748 [2024-12-05 20:35:29.750817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:36.748 [2024-12-05 20:35:29.750823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.748 [2024-12-05 20:35:29.750829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.748 [2024-12-05 20:35:29.750833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.748 [2024-12-05 20:35:29.751402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.315 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.315 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:37.315 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:37.315 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:37.315 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:37.315 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.315 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:37.315 [2024-12-05 20:35:30.648573] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.315 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:37.315 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:37.315 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:37.573 Malloc1 00:15:37.573 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:37.831 Malloc2 00:15:37.831 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:38.090 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:38.090 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.348 [2024-12-05 20:35:31.616225] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.348 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:38.348 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5a73c907-486a-4d35-8cf5-49aa254e9db7 -a 10.0.0.2 -s 4420 -i 4 00:15:38.348 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.348 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:38.348 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.348 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:38.348 20:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:40.885 [ 0]:0x1 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.885 
20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=35f1c9ff0c834636bfde366f0f75dca9 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 35f1c9ff0c834636bfde366f0f75dca9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.885 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:40.885 [ 0]:0x1 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=35f1c9ff0c834636bfde366f0f75dca9 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 35f1c9ff0c834636bfde366f0f75dca9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.885 [ 1]:0x2 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b06473aced5d42c685b3a6d9bbbc6aa9 00:15:40.885 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b06473aced5d42c685b3a6d9bbbc6aa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.886 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:40.886 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.145 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.145 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:41.404 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:41.404 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5a73c907-486a-4d35-8cf5-49aa254e9db7 -a 10.0.0.2 -s 4420 -i 4 00:15:41.664 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:41.664 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:41.664 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.664 20:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:41.664 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:41.664 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:43.570 20:35:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:43.829 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:43.829 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.829 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:43.829 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:43.829 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:43.829 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:43.829 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:15:43.829 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:43.830 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:43.830 [ 0]:0x2 00:15:43.830 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:43.830 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:43.830 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b06473aced5d42c685b3a6d9bbbc6aa9 00:15:43.830 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b06473aced5d42c685b3a6d9bbbc6aa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.830 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:43.830 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:43.830 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:43.830 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.089 [ 0]:0x1 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=35f1c9ff0c834636bfde366f0f75dca9 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 35f1c9ff0c834636bfde366f0f75dca9 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:44.089 [ 1]:0x2 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b06473aced5d42c685b3a6d9bbbc6aa9 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b06473aced5d42c685b3a6d9bbbc6aa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.089 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:44.348 [ 0]:0x2 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b06473aced5d42c685b3a6d9bbbc6aa9 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b06473aced5d42c685b3a6d9bbbc6aa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:44.348 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.606 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:44.606 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:44.606 20:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5a73c907-486a-4d35-8cf5-49aa254e9db7 -a 10.0.0.2 -s 4420 -i 4 00:15:44.865 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:44.865 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:44.865 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:44.865 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:44.865 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:44.865 20:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:46.769 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:46.769 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:46.769 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:46.770 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:46.770 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:46.770 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:46.770 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:46.770 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:47.029 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:47.029 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:47.029 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:47.029 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.029 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.029 [ 0]:0x1 00:15:47.029 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.029 20:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.029 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=35f1c9ff0c834636bfde366f0f75dca9 00:15:47.029 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 35f1c9ff0c834636bfde366f0f75dca9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.029 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:47.029 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.029 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:47.289 [ 1]:0x2 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b06473aced5d42c685b3a6d9bbbc6aa9 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b06473aced5d42c685b3a6d9bbbc6aa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:47.289 
20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.289 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:47.550 [ 0]:0x2 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b06473aced5d42c685b3a6d9bbbc6aa9 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b06473aced5d42c685b3a6d9bbbc6aa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.550 20:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:47.550 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:47.550 [2024-12-05 20:35:40.982521] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:47.550 request: 00:15:47.550 { 00:15:47.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:47.550 "nsid": 2, 00:15:47.550 "host": "nqn.2016-06.io.spdk:host1", 00:15:47.550 "method": "nvmf_ns_remove_host", 00:15:47.550 "req_id": 1 00:15:47.550 } 00:15:47.550 Got JSON-RPC error response 00:15:47.550 response: 00:15:47.550 { 00:15:47.550 "code": -32602, 00:15:47.550 "message": "Invalid parameters" 00:15:47.550 } 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:47.811 20:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:47.811 [ 0]:0x2 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b06473aced5d42c685b3a6d9bbbc6aa9 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b06473aced5d42c685b3a6d9bbbc6aa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=317354 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 317354 /var/tmp/host.sock 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 317354 ']' 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:47.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.811 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:48.071 [2024-12-05 20:35:41.286753] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:15:48.071 [2024-12-05 20:35:41.286795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317354 ] 00:15:48.071 [2024-12-05 20:35:41.359191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.071 [2024-12-05 20:35:41.397138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.330 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.330 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:48.330 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.589 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:48.589 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4cfbd1df-3208-4f71-9f9d-b39cc188d100 00:15:48.589 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:48.589 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4CFBD1DF32084F719F9DB39CC188D100 -i 00:15:48.848 20:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 705cee41-b13b-45a9-af52-0b027784d747 00:15:48.848 20:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:48.848 20:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 705CEE41B13B45A9AF520B027784D747 -i 00:15:49.106 20:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:49.107 20:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:49.366 20:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:49.366 20:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:49.626 nvme0n1 00:15:49.626 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:49.626 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:50.196 nvme1n2 00:15:50.196 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:50.196 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:50.196 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:50.196 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:50.196 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:50.196 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:50.196 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:50.196 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:50.196 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:50.456 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4cfbd1df-3208-4f71-9f9d-b39cc188d100 == \4\c\f\b\d\1\d\f\-\3\2\0\8\-\4\f\7\1\-\9\f\9\d\-\b\3\9\c\c\1\8\8\d\1\0\0 ]] 00:15:50.456 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:50.456 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:50.456 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:50.715 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 705cee41-b13b-45a9-af52-0b027784d747 == \7\0\5\c\e\e\4\1\-\b\1\3\b\-\4\5\a\9\-\a\f\5\2\-\0\b\0\2\7\7\8\4\d\7\4\7 ]] 00:15:50.715 20:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4cfbd1df-3208-4f71-9f9d-b39cc188d100 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4CFBD1DF32084F719F9DB39CC188D100 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4CFBD1DF32084F719F9DB39CC188D100 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:50.974 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4CFBD1DF32084F719F9DB39CC188D100 00:15:51.234 [2024-12-05 20:35:44.524161] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:51.234 [2024-12-05 20:35:44.524193] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:51.234 [2024-12-05 20:35:44.524204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.234 request: 00:15:51.234 { 00:15:51.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.234 "namespace": { 00:15:51.234 "bdev_name": "invalid", 00:15:51.234 "nsid": 1, 00:15:51.234 "nguid": "4CFBD1DF32084F719F9DB39CC188D100", 00:15:51.234 "no_auto_visible": false, 00:15:51.234 "hide_metadata": false 00:15:51.234 }, 00:15:51.234 "method": "nvmf_subsystem_add_ns", 00:15:51.234 "req_id": 1 00:15:51.234 } 00:15:51.234 Got JSON-RPC error response 00:15:51.234 response: 00:15:51.234 { 00:15:51.234 "code": -32602, 00:15:51.234 "message": "Invalid parameters" 00:15:51.234 } 00:15:51.234 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:51.234 20:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:51.234 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:51.234 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:51.234 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4cfbd1df-3208-4f71-9f9d-b39cc188d100 00:15:51.234 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:51.234 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4CFBD1DF32084F719F9DB39CC188D100 -i 00:15:51.494 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:53.401 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:53.401 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:53.401 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:53.662 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:53.662 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 317354 00:15:53.662 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 317354 ']' 00:15:53.662 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 317354 00:15:53.662 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:53.662 20:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.662 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317354 00:15:53.662 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:53.662 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:53.662 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317354' 00:15:53.662 killing process with pid 317354 00:15:53.662 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 317354 00:15:53.662 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 317354 00:15:53.922 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:15:54.181 rmmod nvme_tcp 00:15:54.181 rmmod nvme_fabrics 00:15:54.181 rmmod nvme_keyring 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 315170 ']' 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 315170 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 315170 ']' 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 315170 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 315170 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 315170' 00:15:54.181 killing process with pid 315170 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 315170 00:15:54.181 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 315170 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.441 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.982 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:56.982 00:15:56.982 real 0m26.490s 00:15:56.982 user 0m31.165s 00:15:56.982 sys 0m7.055s 00:15:56.982 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.982 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:56.982 ************************************ 00:15:56.982 END TEST nvmf_ns_masking 00:15:56.982 ************************************ 00:15:56.982 20:35:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:56.982 
20:35:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:56.982 20:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:56.982 20:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.982 20:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.982 ************************************ 00:15:56.982 START TEST nvmf_nvme_cli 00:15:56.982 ************************************ 00:15:56.982 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:56.982 * Looking for test storage... 00:15:56.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.982 
20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.982 --rc genhtml_branch_coverage=1 00:15:56.982 --rc genhtml_function_coverage=1 00:15:56.982 --rc genhtml_legend=1 00:15:56.982 --rc geninfo_all_blocks=1 00:15:56.982 --rc geninfo_unexecuted_blocks=1 00:15:56.982 
00:15:56.982 ' 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.982 --rc genhtml_branch_coverage=1 00:15:56.982 --rc genhtml_function_coverage=1 00:15:56.982 --rc genhtml_legend=1 00:15:56.982 --rc geninfo_all_blocks=1 00:15:56.982 --rc geninfo_unexecuted_blocks=1 00:15:56.982 00:15:56.982 ' 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.982 --rc genhtml_branch_coverage=1 00:15:56.982 --rc genhtml_function_coverage=1 00:15:56.982 --rc genhtml_legend=1 00:15:56.982 --rc geninfo_all_blocks=1 00:15:56.982 --rc geninfo_unexecuted_blocks=1 00:15:56.982 00:15:56.982 ' 00:15:56.982 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.982 --rc genhtml_branch_coverage=1 00:15:56.982 --rc genhtml_function_coverage=1 00:15:56.983 --rc genhtml_legend=1 00:15:56.983 --rc geninfo_all_blocks=1 00:15:56.983 --rc geninfo_unexecuted_blocks=1 00:15:56.983 00:15:56.983 ' 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.983 20:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:56.983 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:56.984 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:03.551 20:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.551 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:03.552 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:03.552 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.552 20:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:03.552 Found net devices under 0000:af:00.0: cvl_0_0 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:03.552 Found net devices under 0000:af:00.1: cvl_0_1 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.552 20:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:03.552 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:03.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:16:03.552 00:16:03.552 --- 10.0.0.2 ping statistics --- 00:16:03.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.552 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:16:03.552 00:16:03.552 --- 10.0.0.1 ping statistics --- 00:16:03.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.552 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:03.552 20:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=322248 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 322248 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 322248 ']' 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.552 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.553 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.553 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.553 [2024-12-05 20:35:56.176551] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:16:03.553 [2024-12-05 20:35:56.176592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.553 [2024-12-05 20:35:56.250498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.553 [2024-12-05 20:35:56.289901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.553 [2024-12-05 20:35:56.289936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.553 [2024-12-05 20:35:56.289943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.553 [2024-12-05 20:35:56.289949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.553 [2024-12-05 20:35:56.289953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:03.553 [2024-12-05 20:35:56.291519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.553 [2024-12-05 20:35:56.291632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.553 [2024-12-05 20:35:56.291735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.553 [2024-12-05 20:35:56.291736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.553 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.553 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:03.553 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:03.553 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:03.553 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.812 [2024-12-05 20:35:57.025902] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.812 Malloc0 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.812 Malloc1 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.812 [2024-12-05 20:35:57.119639] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.812 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:04.072 00:16:04.072 Discovery Log Number of Records 2, Generation counter 2 00:16:04.072 =====Discovery Log Entry 0====== 00:16:04.072 trtype: tcp 00:16:04.072 adrfam: ipv4 00:16:04.072 subtype: current discovery subsystem 00:16:04.072 treq: not required 00:16:04.072 portid: 0 00:16:04.072 trsvcid: 4420 
00:16:04.072 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:04.072 traddr: 10.0.0.2 00:16:04.072 eflags: explicit discovery connections, duplicate discovery information 00:16:04.072 sectype: none 00:16:04.072 =====Discovery Log Entry 1====== 00:16:04.072 trtype: tcp 00:16:04.072 adrfam: ipv4 00:16:04.072 subtype: nvme subsystem 00:16:04.072 treq: not required 00:16:04.072 portid: 0 00:16:04.072 trsvcid: 4420 00:16:04.072 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:04.072 traddr: 10.0.0.2 00:16:04.072 eflags: none 00:16:04.072 sectype: none 00:16:04.072 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:04.072 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:04.072 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:04.072 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.072 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:04.072 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:04.072 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.072 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:04.072 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.072 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:04.072 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:05.450 20:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:05.450 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:05.450 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.450 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:05.450 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:05.450 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:07.355 
20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:07.355 /dev/nvme0n2 ]] 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:07.355 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:07.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:07.615 rmmod nvme_tcp 00:16:07.615 rmmod nvme_fabrics 00:16:07.615 rmmod nvme_keyring 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 322248 ']' 
00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 322248 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 322248 ']' 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 322248 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.615 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 322248 00:16:07.616 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.616 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.616 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 322248' 00:16:07.616 killing process with pid 322248 00:16:07.616 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 322248 00:16:07.616 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 322248 00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.876 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:10.417 00:16:10.417 real 0m13.361s 00:16:10.417 user 0m21.346s 00:16:10.417 sys 0m5.193s 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.417 ************************************ 00:16:10.417 END TEST nvmf_nvme_cli 00:16:10.417 ************************************ 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:10.417 ************************************ 00:16:10.417 START TEST 
nvmf_vfio_user 00:16:10.417 ************************************ 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:10.417 * Looking for test storage... 00:16:10.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:10.417 20:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:10.417 20:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:10.417 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:10.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.418 --rc genhtml_branch_coverage=1 00:16:10.418 --rc genhtml_function_coverage=1 00:16:10.418 --rc genhtml_legend=1 00:16:10.418 --rc geninfo_all_blocks=1 00:16:10.418 --rc geninfo_unexecuted_blocks=1 00:16:10.418 00:16:10.418 ' 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:10.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.418 --rc genhtml_branch_coverage=1 00:16:10.418 --rc genhtml_function_coverage=1 00:16:10.418 --rc genhtml_legend=1 00:16:10.418 --rc geninfo_all_blocks=1 00:16:10.418 --rc geninfo_unexecuted_blocks=1 00:16:10.418 00:16:10.418 ' 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:10.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.418 --rc genhtml_branch_coverage=1 00:16:10.418 --rc genhtml_function_coverage=1 00:16:10.418 --rc genhtml_legend=1 00:16:10.418 --rc geninfo_all_blocks=1 00:16:10.418 --rc geninfo_unexecuted_blocks=1 00:16:10.418 00:16:10.418 ' 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:10.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.418 --rc genhtml_branch_coverage=1 00:16:10.418 --rc genhtml_function_coverage=1 00:16:10.418 --rc genhtml_legend=1 00:16:10.418 --rc geninfo_all_blocks=1 00:16:10.418 --rc geninfo_unexecuted_blocks=1 00:16:10.418 00:16:10.418 ' 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:10.418 
20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:10.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:10.418 20:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:10.418 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=323888 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 323888' 00:16:10.419 Process pid: 323888 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 323888 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
323888 ']' 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.419 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:10.419 [2024-12-05 20:36:03.631482] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:16:10.419 [2024-12-05 20:36:03.631529] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.419 [2024-12-05 20:36:03.704532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.419 [2024-12-05 20:36:03.744241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.419 [2024-12-05 20:36:03.744277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.419 [2024-12-05 20:36:03.744284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.419 [2024-12-05 20:36:03.744289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.419 [2024-12-05 20:36:03.744294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:10.419 [2024-12-05 20:36:03.745775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.419 [2024-12-05 20:36:03.745888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.419 [2024-12-05 20:36:03.746002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.419 [2024-12-05 20:36:03.746003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.359 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.359 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:11.359 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:12.296 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:12.296 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:12.296 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:12.296 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:12.296 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:12.296 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:12.554 Malloc1 00:16:12.554 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:12.814 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:12.814 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:13.073 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:13.073 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:13.073 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:13.333 Malloc2 00:16:13.333 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:13.593 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:13.593 20:36:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:13.855 20:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:13.855 20:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:13.855 20:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:16:13.855 20:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:13.855 20:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:13.855 20:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:13.855 [2024-12-05 20:36:07.167552] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:16:13.855 [2024-12-05 20:36:07.167585] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324508 ] 00:16:13.855 [2024-12-05 20:36:07.206300] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:13.855 [2024-12-05 20:36:07.212355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:13.855 [2024-12-05 20:36:07.212374] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3162a52000 00:16:13.855 [2024-12-05 20:36:07.213350] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:13.855 [2024-12-05 20:36:07.214353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:13.855 [2024-12-05 20:36:07.215362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:13.855 [2024-12-05 20:36:07.216366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:13.855 [2024-12-05 20:36:07.217371] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:13.855 [2024-12-05 20:36:07.218373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:13.855 [2024-12-05 20:36:07.219382] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:13.855 [2024-12-05 20:36:07.220380] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:13.855 [2024-12-05 20:36:07.221390] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:13.855 [2024-12-05 20:36:07.221398] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3162a47000 00:16:13.855 [2024-12-05 20:36:07.222238] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:13.855 [2024-12-05 20:36:07.234808] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:13.855 [2024-12-05 20:36:07.234830] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:13.855 [2024-12-05 20:36:07.237482] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:13.855 [2024-12-05 20:36:07.237515] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:13.855 [2024-12-05 20:36:07.237579] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:13.855 [2024-12-05 20:36:07.237592] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:13.855 [2024-12-05 20:36:07.237597] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:13.855 [2024-12-05 20:36:07.238478] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:13.855 [2024-12-05 20:36:07.238485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:13.855 [2024-12-05 20:36:07.238491] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:13.855 [2024-12-05 20:36:07.239482] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:13.855 [2024-12-05 20:36:07.239489] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:13.855 [2024-12-05 20:36:07.239495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:13.855 [2024-12-05 20:36:07.240487] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:13.855 [2024-12-05 20:36:07.240495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:13.855 [2024-12-05 20:36:07.241490] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:13.855 [2024-12-05 20:36:07.241497] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:13.855 [2024-12-05 20:36:07.241501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:13.855 [2024-12-05 20:36:07.241506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:13.855 [2024-12-05 20:36:07.241612] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:13.855 [2024-12-05 20:36:07.241616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:13.855 [2024-12-05 20:36:07.241620] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:13.855 [2024-12-05 20:36:07.245063] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:13.855 [2024-12-05 20:36:07.245512] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:13.855 [2024-12-05 20:36:07.246518] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:13.855 [2024-12-05 20:36:07.247515] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:13.855 [2024-12-05 20:36:07.247573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:13.855 [2024-12-05 20:36:07.248525] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:13.855 [2024-12-05 20:36:07.248532] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:13.855 [2024-12-05 20:36:07.248536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:13.855 [2024-12-05 20:36:07.248551] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:13.855 [2024-12-05 20:36:07.248559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:13.855 [2024-12-05 20:36:07.248574] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:13.855 [2024-12-05 20:36:07.248578] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:13.856 [2024-12-05 20:36:07.248581] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.856 [2024-12-05 20:36:07.248594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.248628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.248637] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:13.856 [2024-12-05 20:36:07.248642] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:13.856 [2024-12-05 20:36:07.248647] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:13.856 [2024-12-05 20:36:07.248651] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:13.856 [2024-12-05 20:36:07.248655] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:13.856 [2024-12-05 20:36:07.248659] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:13.856 [2024-12-05 20:36:07.248663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.248688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.248698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.856 [2024-12-05 20:36:07.248705] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.856 [2024-12-05 20:36:07.248712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.856 [2024-12-05 20:36:07.248718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.856 [2024-12-05 20:36:07.248722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.248748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.248752] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:13.856 [2024-12-05 20:36:07.248757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248774] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.248783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.248828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248842] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:13.856 [2024-12-05 20:36:07.248846] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:13.856 [2024-12-05 20:36:07.248849] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.856 [2024-12-05 20:36:07.248854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.248867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.248875] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:13.856 [2024-12-05 20:36:07.248885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248896] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:13.856 [2024-12-05 20:36:07.248900] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:13.856 [2024-12-05 20:36:07.248903] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.856 [2024-12-05 20:36:07.248907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.248925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.248935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248947] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:13.856 [2024-12-05 20:36:07.248951] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:13.856 [2024-12-05 20:36:07.248954] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.856 [2024-12-05 20:36:07.248958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.248972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:16:13.856 [2024-12-05 20:36:07.248978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.248996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.249000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.249005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.249009] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:13.856 [2024-12-05 20:36:07.249013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:13.856 [2024-12-05 20:36:07.249017] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:13.856 [2024-12-05 20:36:07.249031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.249041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.249050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.249064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.249074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.249085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.249093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.249104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.249114] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:13.856 [2024-12-05 20:36:07.249118] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:13.856 [2024-12-05 20:36:07.249121] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:13.856 [2024-12-05 20:36:07.249124] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:13.856 [2024-12-05 20:36:07.249126] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:13.856 [2024-12-05 20:36:07.249132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:13.856 [2024-12-05 20:36:07.249137] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:13.856 [2024-12-05 20:36:07.249141] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:13.856 [2024-12-05 20:36:07.249143] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.856 [2024-12-05 20:36:07.249148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.249153] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:13.856 [2024-12-05 20:36:07.249157] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:13.856 [2024-12-05 20:36:07.249159] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.856 [2024-12-05 20:36:07.249164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.249170] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:13.856 [2024-12-05 20:36:07.249173] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:13.856 [2024-12-05 20:36:07.249177] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.856 [2024-12-05 20:36:07.249182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:13.856 [2024-12-05 20:36:07.249188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 
20:36:07.249196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.249205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:13.856 [2024-12-05 20:36:07.249210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:13.856 ===================================================== 00:16:13.856 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:13.857 ===================================================== 00:16:13.857 Controller Capabilities/Features 00:16:13.857 ================================ 00:16:13.857 Vendor ID: 4e58 00:16:13.857 Subsystem Vendor ID: 4e58 00:16:13.857 Serial Number: SPDK1 00:16:13.857 Model Number: SPDK bdev Controller 00:16:13.857 Firmware Version: 25.01 00:16:13.857 Recommended Arb Burst: 6 00:16:13.857 IEEE OUI Identifier: 8d 6b 50 00:16:13.857 Multi-path I/O 00:16:13.857 May have multiple subsystem ports: Yes 00:16:13.857 May have multiple controllers: Yes 00:16:13.857 Associated with SR-IOV VF: No 00:16:13.857 Max Data Transfer Size: 131072 00:16:13.857 Max Number of Namespaces: 32 00:16:13.857 Max Number of I/O Queues: 127 00:16:13.857 NVMe Specification Version (VS): 1.3 00:16:13.857 NVMe Specification Version (Identify): 1.3 00:16:13.857 Maximum Queue Entries: 256 00:16:13.857 Contiguous Queues Required: Yes 00:16:13.857 Arbitration Mechanisms Supported 00:16:13.857 Weighted Round Robin: Not Supported 00:16:13.857 Vendor Specific: Not Supported 00:16:13.857 Reset Timeout: 15000 ms 00:16:13.857 Doorbell Stride: 4 bytes 00:16:13.857 NVM Subsystem Reset: Not Supported 00:16:13.857 Command Sets Supported 00:16:13.857 NVM Command Set: Supported 00:16:13.857 Boot Partition: Not Supported 00:16:13.857 Memory Page Size Minimum: 4096 bytes 00:16:13.857 
Memory Page Size Maximum: 4096 bytes 00:16:13.857 Persistent Memory Region: Not Supported 00:16:13.857 Optional Asynchronous Events Supported 00:16:13.857 Namespace Attribute Notices: Supported 00:16:13.857 Firmware Activation Notices: Not Supported 00:16:13.857 ANA Change Notices: Not Supported 00:16:13.857 PLE Aggregate Log Change Notices: Not Supported 00:16:13.857 LBA Status Info Alert Notices: Not Supported 00:16:13.857 EGE Aggregate Log Change Notices: Not Supported 00:16:13.857 Normal NVM Subsystem Shutdown event: Not Supported 00:16:13.857 Zone Descriptor Change Notices: Not Supported 00:16:13.857 Discovery Log Change Notices: Not Supported 00:16:13.857 Controller Attributes 00:16:13.857 128-bit Host Identifier: Supported 00:16:13.857 Non-Operational Permissive Mode: Not Supported 00:16:13.857 NVM Sets: Not Supported 00:16:13.857 Read Recovery Levels: Not Supported 00:16:13.857 Endurance Groups: Not Supported 00:16:13.857 Predictable Latency Mode: Not Supported 00:16:13.857 Traffic Based Keep ALive: Not Supported 00:16:13.857 Namespace Granularity: Not Supported 00:16:13.857 SQ Associations: Not Supported 00:16:13.857 UUID List: Not Supported 00:16:13.857 Multi-Domain Subsystem: Not Supported 00:16:13.857 Fixed Capacity Management: Not Supported 00:16:13.857 Variable Capacity Management: Not Supported 00:16:13.857 Delete Endurance Group: Not Supported 00:16:13.857 Delete NVM Set: Not Supported 00:16:13.857 Extended LBA Formats Supported: Not Supported 00:16:13.857 Flexible Data Placement Supported: Not Supported 00:16:13.857 00:16:13.857 Controller Memory Buffer Support 00:16:13.857 ================================ 00:16:13.857 Supported: No 00:16:13.857 00:16:13.857 Persistent Memory Region Support 00:16:13.857 ================================ 00:16:13.857 Supported: No 00:16:13.857 00:16:13.857 Admin Command Set Attributes 00:16:13.857 ============================ 00:16:13.857 Security Send/Receive: Not Supported 00:16:13.857 Format NVM: Not Supported 
00:16:13.857 Firmware Activate/Download: Not Supported 00:16:13.857 Namespace Management: Not Supported 00:16:13.857 Device Self-Test: Not Supported 00:16:13.857 Directives: Not Supported 00:16:13.857 NVMe-MI: Not Supported 00:16:13.857 Virtualization Management: Not Supported 00:16:13.857 Doorbell Buffer Config: Not Supported 00:16:13.857 Get LBA Status Capability: Not Supported 00:16:13.857 Command & Feature Lockdown Capability: Not Supported 00:16:13.857 Abort Command Limit: 4 00:16:13.857 Async Event Request Limit: 4 00:16:13.857 Number of Firmware Slots: N/A 00:16:13.857 Firmware Slot 1 Read-Only: N/A 00:16:13.857 Firmware Activation Without Reset: N/A 00:16:13.857 Multiple Update Detection Support: N/A 00:16:13.857 Firmware Update Granularity: No Information Provided 00:16:13.857 Per-Namespace SMART Log: No 00:16:13.857 Asymmetric Namespace Access Log Page: Not Supported 00:16:13.857 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:13.857 Command Effects Log Page: Supported 00:16:13.857 Get Log Page Extended Data: Supported 00:16:13.857 Telemetry Log Pages: Not Supported 00:16:13.857 Persistent Event Log Pages: Not Supported 00:16:13.857 Supported Log Pages Log Page: May Support 00:16:13.857 Commands Supported & Effects Log Page: Not Supported 00:16:13.857 Feature Identifiers & Effects Log Page:May Support 00:16:13.857 NVMe-MI Commands & Effects Log Page: May Support 00:16:13.857 Data Area 4 for Telemetry Log: Not Supported 00:16:13.857 Error Log Page Entries Supported: 128 00:16:13.857 Keep Alive: Supported 00:16:13.857 Keep Alive Granularity: 10000 ms 00:16:13.857 00:16:13.857 NVM Command Set Attributes 00:16:13.857 ========================== 00:16:13.857 Submission Queue Entry Size 00:16:13.857 Max: 64 00:16:13.857 Min: 64 00:16:13.857 Completion Queue Entry Size 00:16:13.857 Max: 16 00:16:13.857 Min: 16 00:16:13.857 Number of Namespaces: 32 00:16:13.857 Compare Command: Supported 00:16:13.857 Write Uncorrectable Command: Not Supported 00:16:13.857 Dataset 
Management Command: Supported 00:16:13.857 Write Zeroes Command: Supported 00:16:13.857 Set Features Save Field: Not Supported 00:16:13.857 Reservations: Not Supported 00:16:13.857 Timestamp: Not Supported 00:16:13.857 Copy: Supported 00:16:13.857 Volatile Write Cache: Present 00:16:13.857 Atomic Write Unit (Normal): 1 00:16:13.857 Atomic Write Unit (PFail): 1 00:16:13.857 Atomic Compare & Write Unit: 1 00:16:13.857 Fused Compare & Write: Supported 00:16:13.857 Scatter-Gather List 00:16:13.857 SGL Command Set: Supported (Dword aligned) 00:16:13.857 SGL Keyed: Not Supported 00:16:13.857 SGL Bit Bucket Descriptor: Not Supported 00:16:13.857 SGL Metadata Pointer: Not Supported 00:16:13.857 Oversized SGL: Not Supported 00:16:13.857 SGL Metadata Address: Not Supported 00:16:13.857 SGL Offset: Not Supported 00:16:13.857 Transport SGL Data Block: Not Supported 00:16:13.857 Replay Protected Memory Block: Not Supported 00:16:13.857 00:16:13.857 Firmware Slot Information 00:16:13.857 ========================= 00:16:13.857 Active slot: 1 00:16:13.857 Slot 1 Firmware Revision: 25.01 00:16:13.857 00:16:13.857 00:16:13.857 Commands Supported and Effects 00:16:13.857 ============================== 00:16:13.857 Admin Commands 00:16:13.857 -------------- 00:16:13.857 Get Log Page (02h): Supported 00:16:13.857 Identify (06h): Supported 00:16:13.857 Abort (08h): Supported 00:16:13.857 Set Features (09h): Supported 00:16:13.857 Get Features (0Ah): Supported 00:16:13.857 Asynchronous Event Request (0Ch): Supported 00:16:13.857 Keep Alive (18h): Supported 00:16:13.857 I/O Commands 00:16:13.857 ------------ 00:16:13.857 Flush (00h): Supported LBA-Change 00:16:13.857 Write (01h): Supported LBA-Change 00:16:13.857 Read (02h): Supported 00:16:13.857 Compare (05h): Supported 00:16:13.857 Write Zeroes (08h): Supported LBA-Change 00:16:13.857 Dataset Management (09h): Supported LBA-Change 00:16:13.857 Copy (19h): Supported LBA-Change 00:16:13.857 00:16:13.857 Error Log 00:16:13.857 ========= 
00:16:13.857 00:16:13.857 Arbitration 00:16:13.857 =========== 00:16:13.857 Arbitration Burst: 1 00:16:13.857 00:16:13.857 Power Management 00:16:13.857 ================ 00:16:13.857 Number of Power States: 1 00:16:13.857 Current Power State: Power State #0 00:16:13.857 Power State #0: 00:16:13.857 Max Power: 0.00 W 00:16:13.857 Non-Operational State: Operational 00:16:13.857 Entry Latency: Not Reported 00:16:13.857 Exit Latency: Not Reported 00:16:13.857 Relative Read Throughput: 0 00:16:13.857 Relative Read Latency: 0 00:16:13.857 Relative Write Throughput: 0 00:16:13.857 Relative Write Latency: 0 00:16:13.857 Idle Power: Not Reported 00:16:13.857 Active Power: Not Reported 00:16:13.857 Non-Operational Permissive Mode: Not Supported 00:16:13.857 00:16:13.857 Health Information 00:16:13.857 ================== 00:16:13.857 Critical Warnings: 00:16:13.857 Available Spare Space: OK 00:16:13.857 Temperature: OK 00:16:13.857 Device Reliability: OK 00:16:13.857 Read Only: No 00:16:13.857 Volatile Memory Backup: OK 00:16:13.857 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:13.857 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:13.857 Available Spare: 0% 00:16:13.857 Available Sp[2024-12-05 20:36:07.249283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:13.857 [2024-12-05 20:36:07.249291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:13.857 [2024-12-05 20:36:07.249313] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:13.858 [2024-12-05 20:36:07.249321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.858 [2024-12-05 20:36:07.249326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.858 [2024-12-05 20:36:07.249331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.858 [2024-12-05 20:36:07.249336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.858 [2024-12-05 20:36:07.249531] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:13.858 [2024-12-05 20:36:07.249539] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:13.858 [2024-12-05 20:36:07.250531] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:13.858 [2024-12-05 20:36:07.250576] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:13.858 [2024-12-05 20:36:07.250581] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:13.858 [2024-12-05 20:36:07.251541] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:13.858 [2024-12-05 20:36:07.251550] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:13.858 [2024-12-05 20:36:07.251595] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:13.858 [2024-12-05 20:36:07.252562] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:13.858 are Threshold: 0% 00:16:13.858 Life Percentage Used: 0% 00:16:13.858 Data Units Read: 0 00:16:13.858 Data 
Units Written: 0 00:16:13.858 Host Read Commands: 0 00:16:13.858 Host Write Commands: 0 00:16:13.858 Controller Busy Time: 0 minutes 00:16:13.858 Power Cycles: 0 00:16:13.858 Power On Hours: 0 hours 00:16:13.858 Unsafe Shutdowns: 0 00:16:13.858 Unrecoverable Media Errors: 0 00:16:13.858 Lifetime Error Log Entries: 0 00:16:13.858 Warning Temperature Time: 0 minutes 00:16:13.858 Critical Temperature Time: 0 minutes 00:16:13.858 00:16:13.858 Number of Queues 00:16:13.858 ================ 00:16:13.858 Number of I/O Submission Queues: 127 00:16:13.858 Number of I/O Completion Queues: 127 00:16:13.858 00:16:13.858 Active Namespaces 00:16:13.858 ================= 00:16:13.858 Namespace ID:1 00:16:13.858 Error Recovery Timeout: Unlimited 00:16:13.858 Command Set Identifier: NVM (00h) 00:16:13.858 Deallocate: Supported 00:16:13.858 Deallocated/Unwritten Error: Not Supported 00:16:13.858 Deallocated Read Value: Unknown 00:16:13.858 Deallocate in Write Zeroes: Not Supported 00:16:13.858 Deallocated Guard Field: 0xFFFF 00:16:13.858 Flush: Supported 00:16:13.858 Reservation: Supported 00:16:13.858 Namespace Sharing Capabilities: Multiple Controllers 00:16:13.858 Size (in LBAs): 131072 (0GiB) 00:16:13.858 Capacity (in LBAs): 131072 (0GiB) 00:16:13.858 Utilization (in LBAs): 131072 (0GiB) 00:16:13.858 NGUID: E081F51D279D4392A41CA371B1AC2AD9 00:16:13.858 UUID: e081f51d-279d-4392-a41c-a371b1ac2ad9 00:16:13.858 Thin Provisioning: Not Supported 00:16:13.858 Per-NS Atomic Units: Yes 00:16:13.858 Atomic Boundary Size (Normal): 0 00:16:13.858 Atomic Boundary Size (PFail): 0 00:16:13.858 Atomic Boundary Offset: 0 00:16:13.858 Maximum Single Source Range Length: 65535 00:16:13.858 Maximum Copy Length: 65535 00:16:13.858 Maximum Source Range Count: 1 00:16:13.858 NGUID/EUI64 Never Reused: No 00:16:13.858 Namespace Write Protected: No 00:16:13.858 Number of LBA Formats: 1 00:16:13.858 Current LBA Format: LBA Format #00 00:16:13.858 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:16:13.858 00:16:13.858 20:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:14.117 [2024-12-05 20:36:07.468847] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:19.386 Initializing NVMe Controllers 00:16:19.386 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:19.386 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:19.386 Initialization complete. Launching workers. 00:16:19.386 ======================================================== 00:16:19.386 Latency(us) 00:16:19.386 Device Information : IOPS MiB/s Average min max 00:16:19.386 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39985.98 156.20 3201.34 890.86 7702.86 00:16:19.386 ======================================================== 00:16:19.386 Total : 39985.98 156.20 3201.34 890.86 7702.86 00:16:19.386 00:16:19.386 [2024-12-05 20:36:12.490520] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:19.386 20:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:19.386 [2024-12-05 20:36:12.709500] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:24.663 Initializing NVMe Controllers 00:16:24.663 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:16:24.663 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:24.663 Initialization complete. Launching workers. 00:16:24.663 ======================================================== 00:16:24.663 Latency(us) 00:16:24.663 Device Information : IOPS MiB/s Average min max 00:16:24.663 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15983.88 62.44 8013.50 4986.25 15964.40 00:16:24.663 ======================================================== 00:16:24.663 Total : 15983.88 62.44 8013.50 4986.25 15964.40 00:16:24.663 00:16:24.663 [2024-12-05 20:36:17.749380] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:24.663 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:24.663 [2024-12-05 20:36:17.947271] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:29.936 [2024-12-05 20:36:23.035491] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:29.936 Initializing NVMe Controllers 00:16:29.936 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:29.936 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:29.936 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:29.936 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:29.936 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:29.936 Initialization complete. Launching workers. 
00:16:29.936 Starting thread on core 2 00:16:29.936 Starting thread on core 3 00:16:29.936 Starting thread on core 1 00:16:29.936 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:29.936 [2024-12-05 20:36:23.312709] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:33.272 [2024-12-05 20:36:26.386603] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:33.272 Initializing NVMe Controllers 00:16:33.272 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.272 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.272 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:33.272 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:33.272 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:33.272 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:33.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:33.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:33.272 Initialization complete. Launching workers. 
00:16:33.272 Starting thread on core 1 with urgent priority queue 00:16:33.272 Starting thread on core 2 with urgent priority queue 00:16:33.272 Starting thread on core 3 with urgent priority queue 00:16:33.272 Starting thread on core 0 with urgent priority queue 00:16:33.272 SPDK bdev Controller (SPDK1 ) core 0: 6957.33 IO/s 14.37 secs/100000 ios 00:16:33.272 SPDK bdev Controller (SPDK1 ) core 1: 6959.67 IO/s 14.37 secs/100000 ios 00:16:33.272 SPDK bdev Controller (SPDK1 ) core 2: 8646.00 IO/s 11.57 secs/100000 ios 00:16:33.272 SPDK bdev Controller (SPDK1 ) core 3: 7118.33 IO/s 14.05 secs/100000 ios 00:16:33.272 ======================================================== 00:16:33.272 00:16:33.272 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:33.272 [2024-12-05 20:36:26.664475] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:33.272 Initializing NVMe Controllers 00:16:33.272 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.272 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.272 Namespace ID: 1 size: 0GB 00:16:33.272 Initialization complete. 00:16:33.272 INFO: using host memory buffer for IO 00:16:33.272 Hello world! 
00:16:33.272 [2024-12-05 20:36:26.697694] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:33.529 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:33.529 [2024-12-05 20:36:26.966425] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:34.905 Initializing NVMe Controllers 00:16:34.905 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.905 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.905 Initialization complete. Launching workers. 00:16:34.905 submit (in ns) avg, min, max = 5254.9, 2965.5, 3997090.9 00:16:34.905 complete (in ns) avg, min, max = 20885.3, 1593.6, 4992605.5 00:16:34.905 00:16:34.905 Submit histogram 00:16:34.905 ================ 00:16:34.905 Range in us Cumulative Count 00:16:34.905 2.953 - 2.967: 0.0057% ( 1) 00:16:34.905 2.967 - 2.982: 0.0396% ( 6) 00:16:34.905 2.982 - 2.996: 0.1359% ( 17) 00:16:34.905 2.996 - 3.011: 0.3171% ( 32) 00:16:34.905 3.011 - 3.025: 0.8493% ( 94) 00:16:34.905 3.025 - 3.040: 2.7122% ( 329) 00:16:34.905 3.040 - 3.055: 6.6248% ( 691) 00:16:34.905 3.055 - 3.069: 11.6868% ( 894) 00:16:34.905 3.069 - 3.084: 17.2527% ( 983) 00:16:34.905 3.084 - 3.098: 23.2433% ( 1058) 00:16:34.905 3.098 - 3.113: 28.8998% ( 999) 00:16:34.905 3.113 - 3.127: 32.3934% ( 617) 00:16:34.905 3.127 - 3.142: 34.7092% ( 409) 00:16:34.905 3.142 - 3.156: 37.2063% ( 441) 00:16:34.905 3.156 - 3.171: 39.5051% ( 406) 00:16:34.905 3.171 - 3.185: 41.8493% ( 414) 00:16:34.905 3.185 - 3.200: 44.4652% ( 462) 00:16:34.905 3.200 - 3.215: 48.3665% ( 689) 00:16:34.905 3.215 - 3.229: 55.1158% ( 1192) 00:16:34.905 3.229 - 3.244: 62.0916% ( 1232) 00:16:34.905 3.244 - 3.258: 69.1354% ( 1244) 
00:16:34.905 3.258 - 3.273: 74.7749% ( 996) 00:16:34.905 3.273 - 3.287: 79.6614% ( 863) 00:16:34.905 3.287 - 3.302: 83.5796% ( 692) 00:16:34.905 3.302 - 3.316: 86.2465% ( 471) 00:16:34.905 3.316 - 3.331: 87.5375% ( 228) 00:16:34.905 3.331 - 3.345: 88.1207% ( 103) 00:16:34.905 3.345 - 3.360: 88.5057% ( 68) 00:16:34.905 3.360 - 3.375: 89.0493% ( 96) 00:16:34.905 3.375 - 3.389: 89.6495% ( 106) 00:16:34.905 3.389 - 3.404: 90.2837% ( 112) 00:16:34.905 3.404 - 3.418: 90.9631% ( 120) 00:16:34.905 3.418 - 3.433: 91.6143% ( 115) 00:16:34.905 3.433 - 3.447: 92.2201% ( 107) 00:16:34.905 3.447 - 3.462: 92.6901% ( 83) 00:16:34.905 3.462 - 3.476: 93.2054% ( 91) 00:16:34.905 3.476 - 3.491: 93.9528% ( 132) 00:16:34.905 3.491 - 3.505: 94.8248% ( 154) 00:16:34.905 3.505 - 3.520: 95.6231% ( 141) 00:16:34.905 3.520 - 3.535: 96.4498% ( 146) 00:16:34.905 3.535 - 3.549: 97.2255% ( 137) 00:16:34.905 3.549 - 3.564: 97.8880% ( 117) 00:16:34.905 3.564 - 3.578: 98.3580% ( 83) 00:16:34.905 3.578 - 3.593: 98.6524% ( 52) 00:16:34.905 3.593 - 3.607: 99.0091% ( 63) 00:16:34.905 3.607 - 3.622: 99.2696% ( 46) 00:16:34.905 3.622 - 3.636: 99.4394% ( 30) 00:16:34.905 3.636 - 3.651: 99.5697% ( 23) 00:16:34.905 3.651 - 3.665: 99.6263% ( 10) 00:16:34.905 3.665 - 3.680: 99.6546% ( 5) 00:16:34.905 3.680 - 3.695: 99.6829% ( 5) 00:16:34.905 3.724 - 3.753: 99.6886% ( 1) 00:16:34.905 3.782 - 3.811: 99.6942% ( 1) 00:16:34.905 4.364 - 4.393: 99.6999% ( 1) 00:16:34.905 4.625 - 4.655: 99.7056% ( 1) 00:16:34.905 4.713 - 4.742: 99.7112% ( 1) 00:16:34.905 4.742 - 4.771: 99.7169% ( 1) 00:16:34.905 4.800 - 4.829: 99.7226% ( 1) 00:16:34.905 4.829 - 4.858: 99.7395% ( 3) 00:16:34.905 4.858 - 4.887: 99.7452% ( 1) 00:16:34.905 4.975 - 5.004: 99.7509% ( 1) 00:16:34.905 5.004 - 5.033: 99.7622% ( 2) 00:16:34.905 5.062 - 5.091: 99.7679% ( 1) 00:16:34.905 5.091 - 5.120: 99.7735% ( 1) 00:16:34.905 5.120 - 5.149: 99.7792% ( 1) 00:16:34.905 5.149 - 5.178: 99.7905% ( 2) 00:16:34.905 5.178 - 5.207: 99.7962% ( 1) 00:16:34.905 5.207 - 
5.236: 99.8075% ( 2) 00:16:34.905 5.295 - 5.324: 99.8188% ( 2) 00:16:34.905 5.324 - 5.353: 99.8245% ( 1) 00:16:34.905 5.411 - 5.440: 99.8415% ( 3) 00:16:34.905 5.440 - 5.469: 99.8471% ( 1) 00:16:34.905 5.498 - 5.527: 99.8528% ( 1) 00:16:34.905 5.527 - 5.556: 99.8584% ( 1) 00:16:34.905 5.644 - 5.673: 99.8641% ( 1) 00:16:34.905 5.702 - 5.731: 99.8698% ( 1) 00:16:34.905 5.731 - 5.760: 99.8754% ( 1) 00:16:34.905 5.818 - 5.847: 99.8811% ( 1) 00:16:34.905 5.847 - 5.876: 99.8868% ( 1) 00:16:34.905 [2024-12-05 20:36:27.986131] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:34.905 5.964 - 5.993: 99.8924% ( 1) 00:16:34.905 5.993 - 6.022: 99.8981% ( 1) 00:16:34.905 6.138 - 6.167: 99.9037% ( 1) 00:16:34.905 6.255 - 6.284: 99.9094% ( 1) 00:16:34.905 6.487 - 6.516: 99.9151% ( 1) 00:16:34.905 6.807 - 6.836: 99.9207% ( 1) 00:16:34.905 6.895 - 6.924: 99.9264% ( 1) 00:16:34.905 6.924 - 6.953: 99.9321% ( 1) 00:16:34.905 7.796 - 7.855: 99.9377% ( 1) 00:16:34.905 9.833 - 9.891: 99.9434% ( 1) 00:16:34.905 11.055 - 11.113: 99.9490% ( 1) 00:16:34.905 3991.738 - 4021.527: 100.0000% ( 9) 00:16:34.905 00:16:34.905 Complete histogram 00:16:34.905 ================== 00:16:34.905 Range in us Cumulative Count 00:16:34.905 1.593 - 1.600: 0.0057% ( 1) 00:16:34.905 1.615 - 1.622: 0.0170% ( 2) 00:16:34.905 1.622 - 1.629: 0.2038% ( 33) 00:16:34.905 1.629 - 1.636: 0.8040% ( 106) 00:16:34.905 1.636 - 1.644: 1.5798% ( 137) 00:16:34.905 1.644 - 1.651: 2.0157% ( 77) 00:16:34.905 1.651 - 1.658: 2.3498% ( 59) 00:16:34.905 1.658 - 1.665: 2.5253% ( 31) 00:16:34.905 1.665 - 1.673: 4.0994% ( 278) 00:16:34.905 1.673 - 1.680: 18.7192% ( 2582) 00:16:34.905 1.680 - 1.687: 52.0412% ( 5885) 00:16:34.905 1.687 - 1.695: 76.5302% ( 4325) 00:16:34.905 1.695 - 1.702: 86.3315% ( 1731) 00:16:34.905 1.702 - 1.709: 90.8556% ( 799) 00:16:34.905 1.709 - 1.716: 93.7772% ( 516) 00:16:34.906 1.716 - 1.724: 95.0512% ( 225) 00:16:34.906 1.724 - 1.731: 95.4363% ( 68) 
00:16:34.906 1.731 - 1.738: 95.6288% ( 34) 00:16:34.906 1.738 - 1.745: 95.9912% ( 64) 00:16:34.906 1.745 - 1.753: 96.8292% ( 148) 00:16:34.906 1.753 - 1.760: 97.8427% ( 179) 00:16:34.906 1.760 - 1.767: 98.5561% ( 126) 00:16:34.906 1.767 - 1.775: 98.8902% ( 59) 00:16:34.906 1.775 - 1.782: 99.1224% ( 41) 00:16:34.906 1.782 - 1.789: 99.2809% ( 28) 00:16:34.906 1.789 - 1.796: 99.2979% ( 3) 00:16:34.906 1.796 - 1.804: 99.3092% ( 2) 00:16:34.906 1.804 - 1.811: 99.3205% ( 2) 00:16:34.906 1.825 - 1.833: 99.3262% ( 1) 00:16:34.906 1.833 - 1.840: 99.3319% ( 1) 00:16:34.906 1.862 - 1.876: 99.3432% ( 2) 00:16:34.906 1.891 - 1.905: 99.3488% ( 1) 00:16:34.906 1.964 - 1.978: 99.3545% ( 1) 00:16:34.906 2.109 - 2.124: 99.3602% ( 1) 00:16:34.906 3.200 - 3.215: 99.3658% ( 1) 00:16:34.906 3.215 - 3.229: 99.3715% ( 1) 00:16:34.906 3.244 - 3.258: 99.3772% ( 1) 00:16:34.906 3.331 - 3.345: 99.3828% ( 1) 00:16:34.906 3.404 - 3.418: 99.3885% ( 1) 00:16:34.906 3.418 - 3.433: 99.3941% ( 1) 00:16:34.906 3.462 - 3.476: 99.3998% ( 1) 00:16:34.906 3.491 - 3.505: 99.4055% ( 1) 00:16:34.906 3.636 - 3.651: 99.4111% ( 1) 00:16:34.906 3.651 - 3.665: 99.4168% ( 1) 00:16:34.906 3.665 - 3.680: 99.4225% ( 1) 00:16:34.906 3.724 - 3.753: 99.4281% ( 1) 00:16:34.906 3.782 - 3.811: 99.4338% ( 1) 00:16:34.906 3.869 - 3.898: 99.4394% ( 1) 00:16:34.906 3.898 - 3.927: 99.4451% ( 1) 00:16:34.906 3.927 - 3.956: 99.4508% ( 1) 00:16:34.906 4.131 - 4.160: 99.4564% ( 1) 00:16:34.906 4.160 - 4.189: 99.4621% ( 1) 00:16:34.906 4.567 - 4.596: 99.4678% ( 1) 00:16:34.906 4.625 - 4.655: 99.4734% ( 1) 00:16:34.906 4.684 - 4.713: 99.4791% ( 1) 00:16:34.906 4.829 - 4.858: 99.4847% ( 1) 00:16:34.906 5.120 - 5.149: 99.4904% ( 1) 00:16:34.906 5.207 - 5.236: 99.4961% ( 1) 00:16:34.906 5.818 - 5.847: 99.5017% ( 1) 00:16:34.906 6.196 - 6.225: 99.5074% ( 1) 00:16:34.906 8.553 - 8.611: 99.5131% ( 1) 00:16:34.906 36.771 - 37.004: 99.5187% ( 1) 00:16:34.906 2442.705 - 2457.600: 99.5244% ( 1) 00:16:34.906 3991.738 - 4021.527: 99.9943% ( 83) 
00:16:34.906 4974.778 - 5004.567: 100.0000% ( 1) 00:16:34.906 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:34.906 [ 00:16:34.906 { 00:16:34.906 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:34.906 "subtype": "Discovery", 00:16:34.906 "listen_addresses": [], 00:16:34.906 "allow_any_host": true, 00:16:34.906 "hosts": [] 00:16:34.906 }, 00:16:34.906 { 00:16:34.906 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:34.906 "subtype": "NVMe", 00:16:34.906 "listen_addresses": [ 00:16:34.906 { 00:16:34.906 "trtype": "VFIOUSER", 00:16:34.906 "adrfam": "IPv4", 00:16:34.906 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:34.906 "trsvcid": "0" 00:16:34.906 } 00:16:34.906 ], 00:16:34.906 "allow_any_host": true, 00:16:34.906 "hosts": [], 00:16:34.906 "serial_number": "SPDK1", 00:16:34.906 "model_number": "SPDK bdev Controller", 00:16:34.906 "max_namespaces": 32, 00:16:34.906 "min_cntlid": 1, 00:16:34.906 "max_cntlid": 65519, 00:16:34.906 "namespaces": [ 00:16:34.906 { 00:16:34.906 "nsid": 1, 00:16:34.906 "bdev_name": "Malloc1", 00:16:34.906 "name": "Malloc1", 00:16:34.906 "nguid": "E081F51D279D4392A41CA371B1AC2AD9", 00:16:34.906 "uuid": "e081f51d-279d-4392-a41c-a371b1ac2ad9" 00:16:34.906 } 00:16:34.906 ] 00:16:34.906 }, 00:16:34.906 { 
00:16:34.906 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:34.906 "subtype": "NVMe", 00:16:34.906 "listen_addresses": [ 00:16:34.906 { 00:16:34.906 "trtype": "VFIOUSER", 00:16:34.906 "adrfam": "IPv4", 00:16:34.906 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:34.906 "trsvcid": "0" 00:16:34.906 } 00:16:34.906 ], 00:16:34.906 "allow_any_host": true, 00:16:34.906 "hosts": [], 00:16:34.906 "serial_number": "SPDK2", 00:16:34.906 "model_number": "SPDK bdev Controller", 00:16:34.906 "max_namespaces": 32, 00:16:34.906 "min_cntlid": 1, 00:16:34.906 "max_cntlid": 65519, 00:16:34.906 "namespaces": [ 00:16:34.906 { 00:16:34.906 "nsid": 1, 00:16:34.906 "bdev_name": "Malloc2", 00:16:34.906 "name": "Malloc2", 00:16:34.906 "nguid": "18EC3B84907E48FE97A434E4B1EB026A", 00:16:34.906 "uuid": "18ec3b84-907e-48fe-97a4-34e4b1eb026a" 00:16:34.906 } 00:16:34.906 ] 00:16:34.906 } 00:16:34.906 ] 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=328670 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:16:34.906 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:35.164 [2024-12-05 20:36:28.369434] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:35.164 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:35.164 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:35.164 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:35.164 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:35.164 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:35.443 Malloc3 00:16:35.443 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:35.443 [2024-12-05 20:36:28.795449] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:35.443 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:35.443 Asynchronous Event Request test 00:16:35.443 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:35.443 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:35.443 Registering asynchronous event callbacks... 00:16:35.443 Starting namespace attribute notice tests for all controllers... 00:16:35.443 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:35.443 aer_cb - Changed Namespace 00:16:35.443 Cleaning up... 
00:16:35.704 [ 00:16:35.704 { 00:16:35.704 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:35.704 "subtype": "Discovery", 00:16:35.704 "listen_addresses": [], 00:16:35.704 "allow_any_host": true, 00:16:35.704 "hosts": [] 00:16:35.704 }, 00:16:35.704 { 00:16:35.704 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:35.705 "subtype": "NVMe", 00:16:35.705 "listen_addresses": [ 00:16:35.705 { 00:16:35.705 "trtype": "VFIOUSER", 00:16:35.705 "adrfam": "IPv4", 00:16:35.705 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:35.705 "trsvcid": "0" 00:16:35.705 } 00:16:35.705 ], 00:16:35.705 "allow_any_host": true, 00:16:35.705 "hosts": [], 00:16:35.705 "serial_number": "SPDK1", 00:16:35.705 "model_number": "SPDK bdev Controller", 00:16:35.705 "max_namespaces": 32, 00:16:35.705 "min_cntlid": 1, 00:16:35.705 "max_cntlid": 65519, 00:16:35.705 "namespaces": [ 00:16:35.705 { 00:16:35.705 "nsid": 1, 00:16:35.705 "bdev_name": "Malloc1", 00:16:35.705 "name": "Malloc1", 00:16:35.705 "nguid": "E081F51D279D4392A41CA371B1AC2AD9", 00:16:35.705 "uuid": "e081f51d-279d-4392-a41c-a371b1ac2ad9" 00:16:35.705 }, 00:16:35.705 { 00:16:35.705 "nsid": 2, 00:16:35.705 "bdev_name": "Malloc3", 00:16:35.705 "name": "Malloc3", 00:16:35.705 "nguid": "1E199CB3AF414C5EA2F3D9D4BF7470D4", 00:16:35.705 "uuid": "1e199cb3-af41-4c5e-a2f3-d9d4bf7470d4" 00:16:35.705 } 00:16:35.705 ] 00:16:35.705 }, 00:16:35.705 { 00:16:35.705 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:35.705 "subtype": "NVMe", 00:16:35.705 "listen_addresses": [ 00:16:35.705 { 00:16:35.705 "trtype": "VFIOUSER", 00:16:35.705 "adrfam": "IPv4", 00:16:35.705 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:35.705 "trsvcid": "0" 00:16:35.705 } 00:16:35.705 ], 00:16:35.705 "allow_any_host": true, 00:16:35.705 "hosts": [], 00:16:35.705 "serial_number": "SPDK2", 00:16:35.705 "model_number": "SPDK bdev Controller", 00:16:35.705 "max_namespaces": 32, 00:16:35.705 "min_cntlid": 1, 00:16:35.705 "max_cntlid": 65519, 00:16:35.705 "namespaces": [ 
00:16:35.705 { 00:16:35.705 "nsid": 1, 00:16:35.705 "bdev_name": "Malloc2", 00:16:35.705 "name": "Malloc2", 00:16:35.705 "nguid": "18EC3B84907E48FE97A434E4B1EB026A", 00:16:35.705 "uuid": "18ec3b84-907e-48fe-97a4-34e4b1eb026a" 00:16:35.705 } 00:16:35.705 ] 00:16:35.705 } 00:16:35.705 ] 00:16:35.705 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 328670 00:16:35.705 20:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:35.705 20:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:35.705 20:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:35.705 20:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:35.705 [2024-12-05 20:36:29.012338] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:16:35.705 [2024-12-05 20:36:29.012364] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328837 ] 00:16:35.705 [2024-12-05 20:36:29.049178] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:35.705 [2024-12-05 20:36:29.057303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:35.705 [2024-12-05 20:36:29.057325] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2e483e9000 00:16:35.705 [2024-12-05 20:36:29.058307] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.705 [2024-12-05 20:36:29.059315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.705 [2024-12-05 20:36:29.060325] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.705 [2024-12-05 20:36:29.061328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:35.705 [2024-12-05 20:36:29.062332] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:35.705 [2024-12-05 20:36:29.063342] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.705 [2024-12-05 20:36:29.064348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:35.705 
[2024-12-05 20:36:29.065360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.705 [2024-12-05 20:36:29.066366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:35.705 [2024-12-05 20:36:29.066376] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2e483de000 00:16:35.705 [2024-12-05 20:36:29.067216] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:35.705 [2024-12-05 20:36:29.080139] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:35.705 [2024-12-05 20:36:29.080166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:35.705 [2024-12-05 20:36:29.082212] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:35.705 [2024-12-05 20:36:29.082245] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:35.705 [2024-12-05 20:36:29.082309] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:35.705 [2024-12-05 20:36:29.082323] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:35.705 [2024-12-05 20:36:29.082328] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:35.705 [2024-12-05 20:36:29.083218] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:35.705 [2024-12-05 20:36:29.083227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:35.705 [2024-12-05 20:36:29.083233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:35.705 [2024-12-05 20:36:29.084226] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:35.705 [2024-12-05 20:36:29.084234] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:35.705 [2024-12-05 20:36:29.084240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:35.705 [2024-12-05 20:36:29.085233] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:35.705 [2024-12-05 20:36:29.085241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:35.705 [2024-12-05 20:36:29.086234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:35.705 [2024-12-05 20:36:29.086242] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:35.705 [2024-12-05 20:36:29.086247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:35.705 [2024-12-05 20:36:29.086253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:35.705 [2024-12-05 20:36:29.086359] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:35.705 [2024-12-05 20:36:29.086363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:35.705 [2024-12-05 20:36:29.086368] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:35.705 [2024-12-05 20:36:29.087249] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:35.705 [2024-12-05 20:36:29.088254] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:35.705 [2024-12-05 20:36:29.089256] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:35.705 [2024-12-05 20:36:29.090264] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:35.706 [2024-12-05 20:36:29.090301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:35.706 [2024-12-05 20:36:29.091274] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:35.706 [2024-12-05 20:36:29.091282] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:35.706 [2024-12-05 20:36:29.091287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.091302] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:35.706 [2024-12-05 20:36:29.091313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.091326] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.706 [2024-12-05 20:36:29.091330] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.706 [2024-12-05 20:36:29.091333] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.706 [2024-12-05 20:36:29.091345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.706 [2024-12-05 20:36:29.099066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:35.706 [2024-12-05 20:36:29.099076] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:35.706 [2024-12-05 20:36:29.099080] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:35.706 [2024-12-05 20:36:29.099086] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:35.706 [2024-12-05 20:36:29.099090] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:35.706 [2024-12-05 20:36:29.099094] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:35.706 [2024-12-05 20:36:29.099098] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:35.706 [2024-12-05 20:36:29.099102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.099108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.099117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:35.706 [2024-12-05 20:36:29.107063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:35.706 [2024-12-05 20:36:29.107074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.706 [2024-12-05 20:36:29.107081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.706 [2024-12-05 20:36:29.107088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.706 [2024-12-05 20:36:29.107094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.706 [2024-12-05 20:36:29.107098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.107106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.107114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:35.706 [2024-12-05 20:36:29.115063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:35.706 [2024-12-05 20:36:29.115069] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:35.706 [2024-12-05 20:36:29.115074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.115079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.115084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.115095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:35.706 [2024-12-05 20:36:29.123064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:35.706 [2024-12-05 20:36:29.123114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.123121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:35.706 
[2024-12-05 20:36:29.123127] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:35.706 [2024-12-05 20:36:29.123131] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:35.706 [2024-12-05 20:36:29.123133] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.706 [2024-12-05 20:36:29.123139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:35.706 [2024-12-05 20:36:29.131062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:35.706 [2024-12-05 20:36:29.131071] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:35.706 [2024-12-05 20:36:29.131079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.131085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.131091] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.706 [2024-12-05 20:36:29.131094] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.706 [2024-12-05 20:36:29.131097] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.706 [2024-12-05 20:36:29.131102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.706 [2024-12-05 20:36:29.139063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:35.706 [2024-12-05 20:36:29.139076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.139083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:35.706 [2024-12-05 20:36:29.139089] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.706 [2024-12-05 20:36:29.139093] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.706 [2024-12-05 20:36:29.139096] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.706 [2024-12-05 20:36:29.139101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.967 [2024-12-05 20:36:29.147062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:35.967 [2024-12-05 20:36:29.147071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:35.967 [2024-12-05 20:36:29.147076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:35.967 [2024-12-05 20:36:29.147085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:35.967 [2024-12-05 20:36:29.147092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:16:35.967 [2024-12-05 20:36:29.147096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:35.967 [2024-12-05 20:36:29.147101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:35.967 [2024-12-05 20:36:29.147105] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:35.967 [2024-12-05 20:36:29.147109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:35.967 [2024-12-05 20:36:29.147113] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:35.967 [2024-12-05 20:36:29.147127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:35.967 [2024-12-05 20:36:29.155062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:35.968 [2024-12-05 20:36:29.155074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:35.968 [2024-12-05 20:36:29.163062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:35.968 [2024-12-05 20:36:29.163073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:35.968 [2024-12-05 20:36:29.171063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:35.968 [2024-12-05 
20:36:29.171081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:35.968 [2024-12-05 20:36:29.179062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:35.968 [2024-12-05 20:36:29.179077] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:35.968 [2024-12-05 20:36:29.179081] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:35.968 [2024-12-05 20:36:29.179084] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:35.968 [2024-12-05 20:36:29.179087] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:35.968 [2024-12-05 20:36:29.179090] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:35.968 [2024-12-05 20:36:29.179095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:35.968 [2024-12-05 20:36:29.179101] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:35.968 [2024-12-05 20:36:29.179105] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:35.968 [2024-12-05 20:36:29.179108] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.968 [2024-12-05 20:36:29.179113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:35.968 [2024-12-05 20:36:29.179118] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:35.968 [2024-12-05 20:36:29.179122] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.968 [2024-12-05 20:36:29.179125] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.968 [2024-12-05 20:36:29.179131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.968 [2024-12-05 20:36:29.179137] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:35.968 [2024-12-05 20:36:29.179141] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:35.968 [2024-12-05 20:36:29.179144] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.968 [2024-12-05 20:36:29.179149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:35.968 [2024-12-05 20:36:29.187063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:35.968 [2024-12-05 20:36:29.187076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:35.968 [2024-12-05 20:36:29.187085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:35.968 [2024-12-05 20:36:29.187091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:35.968 ===================================================== 00:16:35.968 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:35.968 ===================================================== 00:16:35.968 Controller Capabilities/Features 00:16:35.968 
================================ 00:16:35.968 Vendor ID: 4e58 00:16:35.968 Subsystem Vendor ID: 4e58 00:16:35.968 Serial Number: SPDK2 00:16:35.968 Model Number: SPDK bdev Controller 00:16:35.968 Firmware Version: 25.01 00:16:35.968 Recommended Arb Burst: 6 00:16:35.968 IEEE OUI Identifier: 8d 6b 50 00:16:35.968 Multi-path I/O 00:16:35.968 May have multiple subsystem ports: Yes 00:16:35.968 May have multiple controllers: Yes 00:16:35.968 Associated with SR-IOV VF: No 00:16:35.968 Max Data Transfer Size: 131072 00:16:35.968 Max Number of Namespaces: 32 00:16:35.968 Max Number of I/O Queues: 127 00:16:35.968 NVMe Specification Version (VS): 1.3 00:16:35.968 NVMe Specification Version (Identify): 1.3 00:16:35.968 Maximum Queue Entries: 256 00:16:35.968 Contiguous Queues Required: Yes 00:16:35.968 Arbitration Mechanisms Supported 00:16:35.968 Weighted Round Robin: Not Supported 00:16:35.968 Vendor Specific: Not Supported 00:16:35.968 Reset Timeout: 15000 ms 00:16:35.968 Doorbell Stride: 4 bytes 00:16:35.968 NVM Subsystem Reset: Not Supported 00:16:35.968 Command Sets Supported 00:16:35.968 NVM Command Set: Supported 00:16:35.968 Boot Partition: Not Supported 00:16:35.968 Memory Page Size Minimum: 4096 bytes 00:16:35.968 Memory Page Size Maximum: 4096 bytes 00:16:35.968 Persistent Memory Region: Not Supported 00:16:35.968 Optional Asynchronous Events Supported 00:16:35.968 Namespace Attribute Notices: Supported 00:16:35.968 Firmware Activation Notices: Not Supported 00:16:35.968 ANA Change Notices: Not Supported 00:16:35.968 PLE Aggregate Log Change Notices: Not Supported 00:16:35.968 LBA Status Info Alert Notices: Not Supported 00:16:35.968 EGE Aggregate Log Change Notices: Not Supported 00:16:35.968 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.968 Zone Descriptor Change Notices: Not Supported 00:16:35.968 Discovery Log Change Notices: Not Supported 00:16:35.968 Controller Attributes 00:16:35.968 128-bit Host Identifier: Supported 00:16:35.968 
Non-Operational Permissive Mode: Not Supported 00:16:35.968 NVM Sets: Not Supported 00:16:35.968 Read Recovery Levels: Not Supported 00:16:35.968 Endurance Groups: Not Supported 00:16:35.968 Predictable Latency Mode: Not Supported 00:16:35.968 Traffic Based Keep ALive: Not Supported 00:16:35.968 Namespace Granularity: Not Supported 00:16:35.968 SQ Associations: Not Supported 00:16:35.968 UUID List: Not Supported 00:16:35.968 Multi-Domain Subsystem: Not Supported 00:16:35.968 Fixed Capacity Management: Not Supported 00:16:35.968 Variable Capacity Management: Not Supported 00:16:35.968 Delete Endurance Group: Not Supported 00:16:35.968 Delete NVM Set: Not Supported 00:16:35.968 Extended LBA Formats Supported: Not Supported 00:16:35.968 Flexible Data Placement Supported: Not Supported 00:16:35.968 00:16:35.968 Controller Memory Buffer Support 00:16:35.968 ================================ 00:16:35.968 Supported: No 00:16:35.968 00:16:35.968 Persistent Memory Region Support 00:16:35.968 ================================ 00:16:35.968 Supported: No 00:16:35.968 00:16:35.968 Admin Command Set Attributes 00:16:35.968 ============================ 00:16:35.968 Security Send/Receive: Not Supported 00:16:35.968 Format NVM: Not Supported 00:16:35.968 Firmware Activate/Download: Not Supported 00:16:35.968 Namespace Management: Not Supported 00:16:35.968 Device Self-Test: Not Supported 00:16:35.968 Directives: Not Supported 00:16:35.968 NVMe-MI: Not Supported 00:16:35.968 Virtualization Management: Not Supported 00:16:35.968 Doorbell Buffer Config: Not Supported 00:16:35.968 Get LBA Status Capability: Not Supported 00:16:35.968 Command & Feature Lockdown Capability: Not Supported 00:16:35.968 Abort Command Limit: 4 00:16:35.968 Async Event Request Limit: 4 00:16:35.968 Number of Firmware Slots: N/A 00:16:35.968 Firmware Slot 1 Read-Only: N/A 00:16:35.968 Firmware Activation Without Reset: N/A 00:16:35.968 Multiple Update Detection Support: N/A 00:16:35.968 Firmware Update 
Granularity: No Information Provided 00:16:35.968 Per-Namespace SMART Log: No 00:16:35.968 Asymmetric Namespace Access Log Page: Not Supported 00:16:35.968 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:35.968 Command Effects Log Page: Supported 00:16:35.968 Get Log Page Extended Data: Supported 00:16:35.968 Telemetry Log Pages: Not Supported 00:16:35.968 Persistent Event Log Pages: Not Supported 00:16:35.968 Supported Log Pages Log Page: May Support 00:16:35.968 Commands Supported & Effects Log Page: Not Supported 00:16:35.968 Feature Identifiers & Effects Log Page:May Support 00:16:35.968 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.968 Data Area 4 for Telemetry Log: Not Supported 00:16:35.968 Error Log Page Entries Supported: 128 00:16:35.968 Keep Alive: Supported 00:16:35.968 Keep Alive Granularity: 10000 ms 00:16:35.968 00:16:35.968 NVM Command Set Attributes 00:16:35.968 ========================== 00:16:35.968 Submission Queue Entry Size 00:16:35.968 Max: 64 00:16:35.968 Min: 64 00:16:35.968 Completion Queue Entry Size 00:16:35.968 Max: 16 00:16:35.968 Min: 16 00:16:35.968 Number of Namespaces: 32 00:16:35.968 Compare Command: Supported 00:16:35.968 Write Uncorrectable Command: Not Supported 00:16:35.968 Dataset Management Command: Supported 00:16:35.968 Write Zeroes Command: Supported 00:16:35.968 Set Features Save Field: Not Supported 00:16:35.968 Reservations: Not Supported 00:16:35.968 Timestamp: Not Supported 00:16:35.968 Copy: Supported 00:16:35.968 Volatile Write Cache: Present 00:16:35.968 Atomic Write Unit (Normal): 1 00:16:35.968 Atomic Write Unit (PFail): 1 00:16:35.968 Atomic Compare & Write Unit: 1 00:16:35.968 Fused Compare & Write: Supported 00:16:35.968 Scatter-Gather List 00:16:35.969 SGL Command Set: Supported (Dword aligned) 00:16:35.969 SGL Keyed: Not Supported 00:16:35.969 SGL Bit Bucket Descriptor: Not Supported 00:16:35.969 SGL Metadata Pointer: Not Supported 00:16:35.969 Oversized SGL: Not Supported 00:16:35.969 SGL 
Metadata Address: Not Supported 00:16:35.969 SGL Offset: Not Supported 00:16:35.969 Transport SGL Data Block: Not Supported 00:16:35.969 Replay Protected Memory Block: Not Supported 00:16:35.969 00:16:35.969 Firmware Slot Information 00:16:35.969 ========================= 00:16:35.969 Active slot: 1 00:16:35.969 Slot 1 Firmware Revision: 25.01 00:16:35.969 00:16:35.969 00:16:35.969 Commands Supported and Effects 00:16:35.969 ============================== 00:16:35.969 Admin Commands 00:16:35.969 -------------- 00:16:35.969 Get Log Page (02h): Supported 00:16:35.969 Identify (06h): Supported 00:16:35.969 Abort (08h): Supported 00:16:35.969 Set Features (09h): Supported 00:16:35.969 Get Features (0Ah): Supported 00:16:35.969 Asynchronous Event Request (0Ch): Supported 00:16:35.969 Keep Alive (18h): Supported 00:16:35.969 I/O Commands 00:16:35.969 ------------ 00:16:35.969 Flush (00h): Supported LBA-Change 00:16:35.969 Write (01h): Supported LBA-Change 00:16:35.969 Read (02h): Supported 00:16:35.969 Compare (05h): Supported 00:16:35.969 Write Zeroes (08h): Supported LBA-Change 00:16:35.969 Dataset Management (09h): Supported LBA-Change 00:16:35.969 Copy (19h): Supported LBA-Change 00:16:35.969 00:16:35.969 Error Log 00:16:35.969 ========= 00:16:35.969 00:16:35.969 Arbitration 00:16:35.969 =========== 00:16:35.969 Arbitration Burst: 1 00:16:35.969 00:16:35.969 Power Management 00:16:35.969 ================ 00:16:35.969 Number of Power States: 1 00:16:35.969 Current Power State: Power State #0 00:16:35.969 Power State #0: 00:16:35.969 Max Power: 0.00 W 00:16:35.969 Non-Operational State: Operational 00:16:35.969 Entry Latency: Not Reported 00:16:35.969 Exit Latency: Not Reported 00:16:35.969 Relative Read Throughput: 0 00:16:35.969 Relative Read Latency: 0 00:16:35.969 Relative Write Throughput: 0 00:16:35.969 Relative Write Latency: 0 00:16:35.969 Idle Power: Not Reported 00:16:35.969 Active Power: Not Reported 00:16:35.969 Non-Operational Permissive Mode: Not 
Supported 00:16:35.969 00:16:35.969 Health Information 00:16:35.969 ================== 00:16:35.969 Critical Warnings: 00:16:35.969 Available Spare Space: OK 00:16:35.969 Temperature: OK 00:16:35.969 Device Reliability: OK 00:16:35.969 Read Only: No 00:16:35.969 Volatile Memory Backup: OK 00:16:35.969 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:35.969 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:35.969 Available Spare: 0% 00:16:35.969 [2024-12-05 20:36:29.187169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:35.969 [2024-12-05 20:36:29.195063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:35.969 [2024-12-05 20:36:29.195088] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:35.969 [2024-12-05 20:36:29.195096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.969 [2024-12-05 20:36:29.195101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.969 [2024-12-05 20:36:29.195106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.969 [2024-12-05 20:36:29.195111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.969 [2024-12-05 20:36:29.195160] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:35.969 [2024-12-05 20:36:29.195169] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:35.969 
[2024-12-05 20:36:29.196165] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:35.969 [2024-12-05 20:36:29.196204] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:35.969 [2024-12-05 20:36:29.196210] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:35.969 [2024-12-05 20:36:29.197165] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:35.969 [2024-12-05 20:36:29.197175] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:35.969 [2024-12-05 20:36:29.197223] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:35.969 [2024-12-05 20:36:29.198183] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:35.969 Available Spare Threshold: 0% 00:16:35.969 Life Percentage Used: 0% 00:16:35.969 Data Units Read: 0 00:16:35.969 Data Units Written: 0 00:16:35.969 Host Read Commands: 0 00:16:35.969 Host Write Commands: 0 00:16:35.969 Controller Busy Time: 0 minutes 00:16:35.969 Power Cycles: 0 00:16:35.969 Power On Hours: 0 hours 00:16:35.969 Unsafe Shutdowns: 0 00:16:35.969 Unrecoverable Media Errors: 0 00:16:35.969 Lifetime Error Log Entries: 0 00:16:35.969 Warning Temperature Time: 0 minutes 00:16:35.969 Critical Temperature Time: 0 minutes 00:16:35.969 00:16:35.969 Number of Queues 00:16:35.969 ================ 00:16:35.969 Number of I/O Submission Queues: 127 00:16:35.969 Number of I/O Completion Queues: 127 00:16:35.969 00:16:35.969 Active Namespaces 00:16:35.969 ================= 00:16:35.969 Namespace ID:1 00:16:35.969 Error Recovery Timeout: Unlimited 
00:16:35.969 Command Set Identifier: NVM (00h) 00:16:35.969 Deallocate: Supported 00:16:35.969 Deallocated/Unwritten Error: Not Supported 00:16:35.969 Deallocated Read Value: Unknown 00:16:35.969 Deallocate in Write Zeroes: Not Supported 00:16:35.969 Deallocated Guard Field: 0xFFFF 00:16:35.969 Flush: Supported 00:16:35.969 Reservation: Supported 00:16:35.969 Namespace Sharing Capabilities: Multiple Controllers 00:16:35.969 Size (in LBAs): 131072 (0GiB) 00:16:35.969 Capacity (in LBAs): 131072 (0GiB) 00:16:35.969 Utilization (in LBAs): 131072 (0GiB) 00:16:35.969 NGUID: 18EC3B84907E48FE97A434E4B1EB026A 00:16:35.969 UUID: 18ec3b84-907e-48fe-97a4-34e4b1eb026a 00:16:35.969 Thin Provisioning: Not Supported 00:16:35.969 Per-NS Atomic Units: Yes 00:16:35.969 Atomic Boundary Size (Normal): 0 00:16:35.969 Atomic Boundary Size (PFail): 0 00:16:35.969 Atomic Boundary Offset: 0 00:16:35.969 Maximum Single Source Range Length: 65535 00:16:35.969 Maximum Copy Length: 65535 00:16:35.969 Maximum Source Range Count: 1 00:16:35.969 NGUID/EUI64 Never Reused: No 00:16:35.969 Namespace Write Protected: No 00:16:35.969 Number of LBA Formats: 1 00:16:35.969 Current LBA Format: LBA Format #00 00:16:35.969 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:35.969 00:16:35.969 20:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:36.229 [2024-12-05 20:36:29.415924] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:41.502 Initializing NVMe Controllers 00:16:41.502 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:41.502 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:16:41.502 Initialization complete. Launching workers. 00:16:41.502 ======================================================== 00:16:41.502 Latency(us) 00:16:41.502 Device Information : IOPS MiB/s Average min max 00:16:41.502 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39951.07 156.06 3203.76 898.26 8808.65 00:16:41.502 ======================================================== 00:16:41.502 Total : 39951.07 156.06 3203.76 898.26 8808.65 00:16:41.502 00:16:41.502 [2024-12-05 20:36:34.516310] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:41.502 20:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:41.502 [2024-12-05 20:36:34.738974] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:46.774 Initializing NVMe Controllers 00:16:46.774 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:46.774 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:46.774 Initialization complete. Launching workers. 
00:16:46.774 ======================================================== 00:16:46.774 Latency(us) 00:16:46.774 Device Information : IOPS MiB/s Average min max 00:16:46.774 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39987.64 156.20 3200.83 901.84 9699.72 00:16:46.774 ======================================================== 00:16:46.774 Total : 39987.64 156.20 3200.83 901.84 9699.72 00:16:46.774 00:16:46.774 [2024-12-05 20:36:39.757662] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:46.774 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:46.774 [2024-12-05 20:36:39.960395] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:52.047 [2024-12-05 20:36:45.096147] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:52.047 Initializing NVMe Controllers 00:16:52.047 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:52.047 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:52.047 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:52.047 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:52.047 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:52.047 Initialization complete. Launching workers. 
00:16:52.047 Starting thread on core 2 00:16:52.047 Starting thread on core 3 00:16:52.047 Starting thread on core 1 00:16:52.047 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:52.048 [2024-12-05 20:36:45.372442] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:55.327 [2024-12-05 20:36:48.444265] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:55.327 Initializing NVMe Controllers 00:16:55.327 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.327 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.327 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:55.327 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:55.327 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:55.327 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:55.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:55.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:55.327 Initialization complete. Launching workers. 
00:16:55.327 Starting thread on core 1 with urgent priority queue 00:16:55.327 Starting thread on core 2 with urgent priority queue 00:16:55.327 Starting thread on core 3 with urgent priority queue 00:16:55.327 Starting thread on core 0 with urgent priority queue 00:16:55.327 SPDK bdev Controller (SPDK2 ) core 0: 9788.00 IO/s 10.22 secs/100000 ios 00:16:55.327 SPDK bdev Controller (SPDK2 ) core 1: 7584.67 IO/s 13.18 secs/100000 ios 00:16:55.327 SPDK bdev Controller (SPDK2 ) core 2: 11940.33 IO/s 8.37 secs/100000 ios 00:16:55.327 SPDK bdev Controller (SPDK2 ) core 3: 7303.00 IO/s 13.69 secs/100000 ios 00:16:55.327 ======================================================== 00:16:55.327 00:16:55.327 20:36:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:55.327 [2024-12-05 20:36:48.714497] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:55.327 Initializing NVMe Controllers 00:16:55.327 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.327 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.327 Namespace ID: 1 size: 0GB 00:16:55.327 Initialization complete. 00:16:55.327 INFO: using host memory buffer for IO 00:16:55.327 Hello world! 
00:16:55.327 [2024-12-05 20:36:48.726577] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:55.327 20:36:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:55.587 [2024-12-05 20:36:48.987442] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:56.966 Initializing NVMe Controllers 00:16:56.966 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:56.966 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:56.966 Initialization complete. Launching workers. 00:16:56.966 submit (in ns) avg, min, max = 5850.5, 2888.2, 4004010.0 00:16:56.966 complete (in ns) avg, min, max = 20830.9, 1580.0, 4169955.5 00:16:56.966 00:16:56.966 Submit histogram 00:16:56.966 ================ 00:16:56.966 Range in us Cumulative Count 00:16:56.966 2.880 - 2.895: 0.0111% ( 2) 00:16:56.966 2.895 - 2.909: 0.0891% ( 14) 00:16:56.966 2.909 - 2.924: 0.1781% ( 16) 00:16:56.966 2.924 - 2.938: 0.4064% ( 41) 00:16:56.966 2.938 - 2.953: 0.8740% ( 84) 00:16:56.966 2.953 - 2.967: 2.4493% ( 283) 00:16:56.966 2.967 - 2.982: 5.2438% ( 502) 00:16:56.966 2.982 - 2.996: 9.3465% ( 737) 00:16:56.966 2.996 - 3.011: 14.6070% ( 945) 00:16:56.966 3.011 - 3.025: 20.4186% ( 1044) 00:16:56.966 3.025 - 3.040: 25.6958% ( 948) 00:16:56.966 3.040 - 3.055: 29.6760% ( 715) 00:16:56.966 3.055 - 3.069: 32.5317% ( 513) 00:16:56.966 3.069 - 3.084: 35.4876% ( 531) 00:16:56.966 3.084 - 3.098: 37.5696% ( 374) 00:16:56.966 3.098 - 3.113: 39.8798% ( 415) 00:16:56.966 3.113 - 3.127: 41.9172% ( 366) 00:16:56.966 3.127 - 3.142: 44.8118% ( 520) 00:16:56.966 3.142 - 3.156: 51.1690% ( 1142) 00:16:56.966 3.156 - 3.171: 57.4093% ( 1121) 00:16:56.966 3.171 - 3.185: 63.6440% ( 1120) 
00:16:56.966 3.185 - 3.200: 68.6651% ( 902) 00:16:56.966 3.200 - 3.215: 73.7586% ( 915) 00:16:56.966 3.215 - 3.229: 78.1062% ( 781) 00:16:56.966 3.229 - 3.244: 81.1568% ( 548) 00:16:56.966 3.244 - 3.258: 83.0494% ( 340) 00:16:56.966 3.258 - 3.273: 84.2407% ( 214) 00:16:56.966 3.273 - 3.287: 85.1592% ( 165) 00:16:56.966 3.287 - 3.302: 85.8272% ( 120) 00:16:56.966 3.302 - 3.316: 86.5008% ( 121) 00:16:56.966 3.316 - 3.331: 87.2912% ( 142) 00:16:56.966 3.331 - 3.345: 88.0762% ( 141) 00:16:56.966 3.345 - 3.360: 88.7887% ( 128) 00:16:56.966 3.360 - 3.375: 89.4233% ( 114) 00:16:56.966 3.375 - 3.389: 90.0468% ( 112) 00:16:56.966 3.389 - 3.404: 90.5478% ( 90) 00:16:56.966 3.404 - 3.418: 91.3438% ( 143) 00:16:56.966 3.418 - 3.433: 92.2178% ( 157) 00:16:56.967 3.433 - 3.447: 92.9804% ( 137) 00:16:56.967 3.447 - 3.462: 93.6818% ( 126) 00:16:56.967 3.462 - 3.476: 94.3888% ( 127) 00:16:56.967 3.476 - 3.491: 95.0846% ( 125) 00:16:56.967 3.491 - 3.505: 95.6023% ( 93) 00:16:56.967 3.505 - 3.520: 96.0588% ( 82) 00:16:56.967 3.520 - 3.535: 96.4986% ( 79) 00:16:56.967 3.535 - 3.549: 96.7992% ( 54) 00:16:56.967 3.549 - 3.564: 97.0719% ( 49) 00:16:56.967 3.564 - 3.578: 97.2723% ( 36) 00:16:56.967 3.578 - 3.593: 97.4505% ( 32) 00:16:56.967 3.593 - 3.607: 97.5340% ( 15) 00:16:56.967 3.607 - 3.622: 97.6564% ( 22) 00:16:56.967 3.622 - 3.636: 97.7566% ( 18) 00:16:56.967 3.636 - 3.651: 97.8179% ( 11) 00:16:56.967 3.651 - 3.665: 97.9069% ( 16) 00:16:56.967 3.665 - 3.680: 97.9737% ( 12) 00:16:56.967 3.680 - 3.695: 98.0238% ( 9) 00:16:56.967 3.695 - 3.709: 98.0795% ( 10) 00:16:56.967 3.709 - 3.724: 98.1407% ( 11) 00:16:56.967 3.724 - 3.753: 98.2743% ( 24) 00:16:56.967 3.753 - 3.782: 98.4246% ( 27) 00:16:56.967 3.782 - 3.811: 98.5582% ( 24) 00:16:56.967 3.811 - 3.840: 98.6696% ( 20) 00:16:56.967 3.840 - 3.869: 98.7642% ( 17) 00:16:56.967 3.869 - 3.898: 98.8199% ( 10) 00:16:56.967 3.898 - 3.927: 98.9034% ( 15) 00:16:56.967 3.927 - 3.956: 98.9869% ( 15) 00:16:56.967 3.956 - 3.985: 99.0481% ( 11) 
00:16:56.967 3.985 - 4.015: 99.1205% ( 13) 00:16:56.967 4.015 - 4.044: 99.1817% ( 11) 00:16:56.967 4.044 - 4.073: 99.2429% ( 11) 00:16:56.967 4.073 - 4.102: 99.2708% ( 5) 00:16:56.967 4.102 - 4.131: 99.3097% ( 7) 00:16:56.967 4.131 - 4.160: 99.3598% ( 9) 00:16:56.967 4.160 - 4.189: 99.3765% ( 3) 00:16:56.967 4.189 - 4.218: 99.4044% ( 5) 00:16:56.967 4.218 - 4.247: 99.4266% ( 4) 00:16:56.967 4.247 - 4.276: 99.4545% ( 5) 00:16:56.967 4.305 - 4.335: 99.4600% ( 1) 00:16:56.967 4.335 - 4.364: 99.4656% ( 1) 00:16:56.967 4.364 - 4.393: 99.4767% ( 2) 00:16:56.967 4.393 - 4.422: 99.4934% ( 3) 00:16:56.967 4.422 - 4.451: 99.4990% ( 1) 00:16:56.967 4.451 - 4.480: 99.5157% ( 3) 00:16:56.967 4.480 - 4.509: 99.5324% ( 3) 00:16:56.967 4.538 - 4.567: 99.5380% ( 1) 00:16:56.967 4.567 - 4.596: 99.5435% ( 1) 00:16:56.967 4.596 - 4.625: 99.5491% ( 1) 00:16:56.967 4.625 - 4.655: 99.5547% ( 1) 00:16:56.967 4.655 - 4.684: 99.5658% ( 2) 00:16:56.967 4.713 - 4.742: 99.5769% ( 2) 00:16:56.967 4.742 - 4.771: 99.5825% ( 1) 00:16:56.967 4.771 - 4.800: 99.5881% ( 1) 00:16:56.967 4.800 - 4.829: 99.5936% ( 1) 00:16:56.967 4.829 - 4.858: 99.5992% ( 1) 00:16:56.967 4.858 - 4.887: 99.6048% ( 1) 00:16:56.967 4.916 - 4.945: 99.6103% ( 1) 00:16:56.967 5.004 - 5.033: 99.6159% ( 1) 00:16:56.967 5.033 - 5.062: 99.6270% ( 2) 00:16:56.967 5.062 - 5.091: 99.6326% ( 1) 00:16:56.967 5.120 - 5.149: 99.6382% ( 1) 00:16:56.967 5.178 - 5.207: 99.6437% ( 1) 00:16:56.967 5.236 - 5.265: 99.6493% ( 1) 00:16:56.967 5.295 - 5.324: 99.6549% ( 1) 00:16:56.967 5.324 - 5.353: 99.6604% ( 1) 00:16:56.967 5.353 - 5.382: 99.6660% ( 1) 00:16:56.967 5.411 - 5.440: 99.6771% ( 2) 00:16:56.967 5.469 - 5.498: 99.6827% ( 1) 00:16:56.967 5.527 - 5.556: 99.6883% ( 1) 00:16:56.967 5.556 - 5.585: 99.7050% ( 3) 00:16:56.967 5.702 - 5.731: 99.7105% ( 1) 00:16:56.967 5.760 - 5.789: 99.7272% ( 3) 00:16:56.967 5.847 - 5.876: 99.7328% ( 1) 00:16:56.967 5.876 - 5.905: 99.7384% ( 1) 00:16:56.967 5.905 - 5.935: 99.7439% ( 1) 00:16:56.967 6.051 - 
6.080: 99.7495% ( 1) 00:16:56.967 6.109 - 6.138: 99.7551% ( 1) 00:16:56.967 6.167 - 6.196: 99.7662% ( 2) 00:16:56.967 6.196 - 6.225: 99.7718% ( 1) 00:16:56.967 6.225 - 6.255: 99.7773% ( 1) 00:16:56.967 6.255 - 6.284: 99.7885% ( 2) 00:16:56.967 6.400 - 6.429: 99.7940% ( 1) 00:16:56.967 6.429 - 6.458: 99.8052% ( 2) 00:16:56.967 6.487 - 6.516: 99.8107% ( 1) 00:16:56.967 6.545 - 6.575: 99.8219% ( 2) 00:16:56.967 7.098 - 7.127: 99.8274% ( 1) 00:16:56.967 7.127 - 7.156: 99.8386% ( 2) 00:16:56.967 7.331 - 7.360: 99.8441% ( 1) 00:16:56.967 7.796 - 7.855: 99.8497% ( 1) 00:16:56.967 8.436 - 8.495: 99.8553% ( 1) 00:16:56.967 10.240 - 10.298: 99.8608% ( 1) 00:16:56.967 10.415 - 10.473: 99.8664% ( 1) 00:16:56.967 11.113 - 11.171: 99.8720% ( 1) 00:16:56.967 13.382 - 13.440: 99.8831% ( 2) 00:16:56.967 13.440 - 13.498: 99.8887% ( 1) 00:16:56.967 13.498 - 13.556: 99.8942% ( 1) 00:16:56.967 13.615 - 13.673: 99.8998% ( 1) 00:16:56.967 15.476 - 15.593: 99.9054% ( 1) 00:16:56.967 17.804 - 17.920: 99.9109% ( 1) 00:16:56.967 18.967 - 19.084: 99.9165% ( 1) 00:16:56.967 19.084 - 19.200: 99.9276% ( 2) 00:16:56.967 21.527 - 21.644: 99.9332% ( 1) 00:16:56.967 3991.738 - 4021.527: 100.0000% ( 12) 00:16:56.967 00:16:56.967 Complete histogram 00:16:56.967 ================== 00:16:56.967 Range in us Cumulative Count 00:16:56.967 1.578 - 1.585: 0.0167% ( 3) 00:16:56.967 1.585 - 1.593: 0.1670% ( 27) 00:16:56.967 1.593 - 1.600: 0.4509% ( 51) 00:16:56.967 1.600 - 1.607: 0.6457% ( 35) 00:16:56.967 1.607 - 1.615: 0.7237% ( 14) 00:16:56.967 1.615 - 1.622: 0.7404% ( 3) 00:16:56.967 1.622 - 1.629: 1.2080% ( 84) 00:16:56.967 1.629 - 1.636: 5.1715% ( 712) 00:16:56.967 1.636 - 1.644: 13.0094% ( 1408) 00:16:56.967 1.644 - 1.651: 17.5796% ( 821) 00:16:56.967 1.651 - 1.658: 19.2663% ( 303) 00:16:56.967 1.658 - 1.665: 20.1347% ( 156) 00:16:56.967 1.665 - 1.673: 21.6934% ( 280) 00:16:56.967 1.673 - 1.680: 32.4928% ( 1940) 00:16:56.967 1.680 - 1.687: 61.4173% ( 5196) 00:16:56.967 1.687 - 1.695: 82.0753% ( 3711) 
00:16:56.967 1.695 - 1.702: 88.9668% ( 1238) 00:16:56.967 1.702 - 1.709: 91.6500% ( 482) 00:16:56.967 1.709 - 1.716: 93.2810% ( 293) 00:16:56.967 1.716 - 1.724: 93.9880% ( 127) 00:16:56.967 1.724 - 1.731: 94.2607% ( 49) 00:16:56.967 1.731 - 1.738: 94.4556% ( 35) 00:16:56.967 1.738 - 1.745: 94.6170% ( 29) 00:16:56.967 1.745 - 1.753: 94.8508% ( 42) 00:16:56.967 1.753 - 1.760: 95.1792% ( 59) 00:16:56.967 1.760 - 1.767: 95.4743% ( 53) 00:16:56.967 1.767 - 1.775: 95.6246% ( 27) 00:16:56.967 1.775 - 1.782: 95.7081% ( 15) 00:16:56.967 1.782 - 1.789: 95.7415% ( 6) 00:16:56.967 1.789 - 1.796: 95.7637% ( 4) 00:16:56.967 1.796 - 1.804: 95.7749% ( 2) 00:16:56.967 1.811 - 1.818: 95.7804% ( 1) 00:16:56.967 1.818 - 1.825: 95.8083% ( 5) 00:16:56.967 1.825 - 1.833: 95.8640% ( 10) 00:16:56.967 1.833 - 1.840: 95.9419% ( 14) 00:16:56.967 1.840 - 1.847: 95.9753% ( 6) 00:16:56.967 1.847 - 1.855: 96.0477% ( 13) 00:16:56.967 1.855 - 1.862: 96.1145% ( 12) 00:16:56.967 1.862 - 1.876: 96.2815% ( 30) 00:16:56.967 1.876 - 1.891: 96.6043% ( 58) 00:16:56.967 1.891 - 1.905: 96.9495% ( 62) 00:16:56.967 1.905 - 1.920: 97.2501% ( 54) 00:16:56.967 1.920 - 1.935: 97.4727% ( 40) 00:16:56.967 1.935 - 1.949: 97.7010% ( 41) 00:16:56.967 1.949 - 1.964: 97.8401% ( 25) 00:16:56.967 1.964 - 1.978: 97.9737% ( 24) 00:16:56.967 1.978 - 1.993: 98.0628% ( 16) 00:16:56.967 1.993 - 2.007: 98.1686% ( 19) 00:16:56.967 2.007 - 2.022: 98.2743% ( 19) 00:16:56.967 2.022 - 2.036: 98.3745% ( 18) 00:16:56.967 2.036 - 2.051: 98.4358% ( 11) 00:16:56.967 2.051 - 2.065: 98.5471% ( 20) 00:16:56.967 2.065 - 2.080: 98.6974% ( 27) 00:16:56.967 2.080 - 2.095: 98.7809% ( 15) 00:16:56.967 2.095 - 2.109: 98.8199% ( 7) 00:16:56.967 2.109 - 2.124: 98.8366% ( 3) 00:16:56.967 2.124 - 2.138: 98.8700% ( 6) 00:16:56.967 2.138 - 2.153: 98.8978% ( 5) 00:16:56.967 2.153 - 2.167: 98.9201% ( 4) 00:16:56.967 2.167 - 2.182: 98.9312% ( 2) 00:16:56.967 2.182 - 2.196: 98.9646% ( 6) 00:16:56.967 2.196 - 2.211: 98.9924% ( 5) 00:16:56.967 2.211 - 2.225: 
99.0091% ( 3) 00:16:56.967 2.225 - 2.240: 99.0314% ( 4) 00:16:56.967 2.240 - 2.255: 99.0425% ( 2) 00:16:56.967 2.255 - 2.269: 99.0537% ( 2) 00:16:56.967 2.269 - 2.284: 99.0704% ( 3) 00:16:56.967 2.284 - 2.298: 99.0871% ( 3) 00:16:56.967 2.298 - 2.313: 99.1093% ( 4) 00:16:56.967 2.313 - 2.327: 99.1205% ( 2) 00:16:56.967 2.327 - 2.342: 99.1260% ( 1) 00:16:56.967 2.342 - 2.356: 99.1316% ( 1) 00:16:56.967 2.356 - 2.371: 99.1372% ( 1) 00:16:56.967 2.371 - 2.385: 99.1483% ( 2) 00:16:56.967 2.385 - 2.400: 99.1539% ( 1) 00:16:56.967 2.400 - 2.415: 99.1706% ( 3) 00:16:56.967 2.415 - 2.429: 99.1761% ( 1) 00:16:56.967 2.429 - 2.444: 99.1817% ( 1) 00:16:56.967 2.444 - 2.458: 99.1873% ( 1) 00:16:56.967 2.458 - 2.473: 99.1984% ( 2) 00:16:56.967 2.487 - 2.502: 99.2040% ( 1) 00:16:56.967 2.502 - 2.516: 99.2262% ( 4) 00:16:56.967 2.516 - 2.531: 99.2318% ( 1) 00:16:56.967 2.560 - 2.575: 99.2429% ( 2) 00:16:56.967 2.604 - 2.618: 99.2596% ( 3) 00:16:56.967 2.618 - 2.633: 99.2652% ( 1) 00:16:56.967 2.676 - 2.691: 99.2708% ( 1) 00:16:56.967 2.953 - 2.967: 99.2763% ( 1) 00:16:56.967 2.982 - 2.996: 99.2819% ( 1) 00:16:56.967 3.476 - 3.491: 99.2875% ( 1) 00:16:56.967 3.593 - 3.607: 99.3042% ( 3) 00:16:56.967 3.607 - 3.622: 99.3097% ( 1) 00:16:56.967 3.651 - 3.665: 99.3153% ( 1) 00:16:56.967 3.680 - 3.695: 99.3209% ( 1) 00:16:56.967 3.753 - 3.782: 99.3264% ( 1) 00:16:56.967 3.869 - 3.898: 99.3431% ( 3) 00:16:56.967 3.898 - 3.927: 99.3487% ( 1) 00:16:56.967 3.985 - 4.015: 99.3543% ( 1) 00:16:56.967 4.044 - 4.073: 99.3654% ( 2) 00:16:56.967 4.102 - 4.131: 99.3710% ( 1) 00:16:56.967 4.131 - 4.160: 99.3765% ( 1) 00:16:56.967 4.189 - 4.218: 99.3821% ( 1) 00:16:56.967 4.305 - 4.335: 99.3877% ( 1) 00:16:56.967 4.393 - 4.422: 99.3932% ( 1) 00:16:56.967 4.422 - 4.451: 99.3988% ( 1) 00:16:56.967 4.451 - 4.480: 99.4155% ( 3) 00:16:56.967 4.596 - 4.625: 99.4211% ( 1) 00:16:56.967 4.713 - 4.742: 99.4266% ( 1) 00:16:56.967 4.975 - 5.004: 99.4322% ( 1) 00:16:56.967 5.033 - 5.062: 9[2024-12-05 
20:36:50.088933] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:56.967 9.4378% ( 1) 00:16:56.967 5.062 - 5.091: 99.4433% ( 1) 00:16:56.967 5.265 - 5.295: 99.4489% ( 1) 00:16:56.967 5.295 - 5.324: 99.4545% ( 1) 00:16:56.967 5.440 - 5.469: 99.4600% ( 1) 00:16:56.967 6.051 - 6.080: 99.4656% ( 1) 00:16:56.967 7.273 - 7.302: 99.4712% ( 1) 00:16:56.967 8.320 - 8.378: 99.4767% ( 1) 00:16:56.967 10.589 - 10.647: 99.4823% ( 1) 00:16:56.967 11.869 - 11.927: 99.4879% ( 1) 00:16:56.967 11.927 - 11.985: 99.4934% ( 1) 00:16:56.967 17.455 - 17.571: 99.4990% ( 1) 00:16:56.967 22.575 - 22.691: 99.5046% ( 1) 00:16:56.967 37.004 - 37.236: 99.5101% ( 1) 00:16:56.967 60.509 - 60.975: 99.5157% ( 1) 00:16:56.967 997.935 - 1005.382: 99.5213% ( 1) 00:16:56.967 2874.647 - 2889.542: 99.5268% ( 1) 00:16:56.967 3991.738 - 4021.527: 99.9889% ( 83) 00:16:56.967 4021.527 - 4051.316: 99.9944% ( 1) 00:16:56.967 4140.684 - 4170.473: 100.0000% ( 1) 00:16:56.967 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:56.968 [ 00:16:56.968 { 00:16:56.968 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:56.968 "subtype": "Discovery", 00:16:56.968 "listen_addresses": [], 00:16:56.968 "allow_any_host": true, 00:16:56.968 
"hosts": [] 00:16:56.968 }, 00:16:56.968 { 00:16:56.968 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:56.968 "subtype": "NVMe", 00:16:56.968 "listen_addresses": [ 00:16:56.968 { 00:16:56.968 "trtype": "VFIOUSER", 00:16:56.968 "adrfam": "IPv4", 00:16:56.968 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:56.968 "trsvcid": "0" 00:16:56.968 } 00:16:56.968 ], 00:16:56.968 "allow_any_host": true, 00:16:56.968 "hosts": [], 00:16:56.968 "serial_number": "SPDK1", 00:16:56.968 "model_number": "SPDK bdev Controller", 00:16:56.968 "max_namespaces": 32, 00:16:56.968 "min_cntlid": 1, 00:16:56.968 "max_cntlid": 65519, 00:16:56.968 "namespaces": [ 00:16:56.968 { 00:16:56.968 "nsid": 1, 00:16:56.968 "bdev_name": "Malloc1", 00:16:56.968 "name": "Malloc1", 00:16:56.968 "nguid": "E081F51D279D4392A41CA371B1AC2AD9", 00:16:56.968 "uuid": "e081f51d-279d-4392-a41c-a371b1ac2ad9" 00:16:56.968 }, 00:16:56.968 { 00:16:56.968 "nsid": 2, 00:16:56.968 "bdev_name": "Malloc3", 00:16:56.968 "name": "Malloc3", 00:16:56.968 "nguid": "1E199CB3AF414C5EA2F3D9D4BF7470D4", 00:16:56.968 "uuid": "1e199cb3-af41-4c5e-a2f3-d9d4bf7470d4" 00:16:56.968 } 00:16:56.968 ] 00:16:56.968 }, 00:16:56.968 { 00:16:56.968 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:56.968 "subtype": "NVMe", 00:16:56.968 "listen_addresses": [ 00:16:56.968 { 00:16:56.968 "trtype": "VFIOUSER", 00:16:56.968 "adrfam": "IPv4", 00:16:56.968 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:56.968 "trsvcid": "0" 00:16:56.968 } 00:16:56.968 ], 00:16:56.968 "allow_any_host": true, 00:16:56.968 "hosts": [], 00:16:56.968 "serial_number": "SPDK2", 00:16:56.968 "model_number": "SPDK bdev Controller", 00:16:56.968 "max_namespaces": 32, 00:16:56.968 "min_cntlid": 1, 00:16:56.968 "max_cntlid": 65519, 00:16:56.968 "namespaces": [ 00:16:56.968 { 00:16:56.968 "nsid": 1, 00:16:56.968 "bdev_name": "Malloc2", 00:16:56.968 "name": "Malloc2", 00:16:56.968 "nguid": "18EC3B84907E48FE97A434E4B1EB026A", 00:16:56.968 "uuid": 
"18ec3b84-907e-48fe-97a4-34e4b1eb026a" 00:16:56.968 } 00:16:56.968 ] 00:16:56.968 } 00:16:56.968 ] 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=332731 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:16:56.968 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:57.227 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:57.227 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:16:57.228 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:16:57.228 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:57.228 [2024-12-05 20:36:50.482447] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:57.228 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:57.228 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:57.228 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:57.228 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:57.228 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:57.486 Malloc4 00:16:57.486 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:57.486 [2024-12-05 20:36:50.899458] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:57.486 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:57.745 Asynchronous Event Request test 00:16:57.745 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:57.745 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:57.745 
Registering asynchronous event callbacks... 00:16:57.745 Starting namespace attribute notice tests for all controllers... 00:16:57.745 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:57.745 aer_cb - Changed Namespace 00:16:57.745 Cleaning up... 00:16:57.745 [ 00:16:57.745 { 00:16:57.745 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:57.745 "subtype": "Discovery", 00:16:57.745 "listen_addresses": [], 00:16:57.745 "allow_any_host": true, 00:16:57.745 "hosts": [] 00:16:57.745 }, 00:16:57.745 { 00:16:57.745 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:57.745 "subtype": "NVMe", 00:16:57.745 "listen_addresses": [ 00:16:57.745 { 00:16:57.745 "trtype": "VFIOUSER", 00:16:57.745 "adrfam": "IPv4", 00:16:57.745 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:57.745 "trsvcid": "0" 00:16:57.745 } 00:16:57.745 ], 00:16:57.745 "allow_any_host": true, 00:16:57.745 "hosts": [], 00:16:57.745 "serial_number": "SPDK1", 00:16:57.745 "model_number": "SPDK bdev Controller", 00:16:57.745 "max_namespaces": 32, 00:16:57.745 "min_cntlid": 1, 00:16:57.745 "max_cntlid": 65519, 00:16:57.745 "namespaces": [ 00:16:57.745 { 00:16:57.745 "nsid": 1, 00:16:57.745 "bdev_name": "Malloc1", 00:16:57.745 "name": "Malloc1", 00:16:57.745 "nguid": "E081F51D279D4392A41CA371B1AC2AD9", 00:16:57.745 "uuid": "e081f51d-279d-4392-a41c-a371b1ac2ad9" 00:16:57.745 }, 00:16:57.745 { 00:16:57.745 "nsid": 2, 00:16:57.745 "bdev_name": "Malloc3", 00:16:57.745 "name": "Malloc3", 00:16:57.745 "nguid": "1E199CB3AF414C5EA2F3D9D4BF7470D4", 00:16:57.745 "uuid": "1e199cb3-af41-4c5e-a2f3-d9d4bf7470d4" 00:16:57.745 } 00:16:57.745 ] 00:16:57.745 }, 00:16:57.745 { 00:16:57.745 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:57.745 "subtype": "NVMe", 00:16:57.745 "listen_addresses": [ 00:16:57.745 { 00:16:57.745 "trtype": "VFIOUSER", 00:16:57.745 "adrfam": "IPv4", 00:16:57.745 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:57.745 "trsvcid": "0" 
00:16:57.745 } 00:16:57.745 ], 00:16:57.745 "allow_any_host": true, 00:16:57.745 "hosts": [], 00:16:57.745 "serial_number": "SPDK2", 00:16:57.745 "model_number": "SPDK bdev Controller", 00:16:57.745 "max_namespaces": 32, 00:16:57.745 "min_cntlid": 1, 00:16:57.745 "max_cntlid": 65519, 00:16:57.745 "namespaces": [ 00:16:57.745 { 00:16:57.745 "nsid": 1, 00:16:57.745 "bdev_name": "Malloc2", 00:16:57.745 "name": "Malloc2", 00:16:57.745 "nguid": "18EC3B84907E48FE97A434E4B1EB026A", 00:16:57.745 "uuid": "18ec3b84-907e-48fe-97a4-34e4b1eb026a" 00:16:57.745 }, 00:16:57.745 { 00:16:57.745 "nsid": 2, 00:16:57.745 "bdev_name": "Malloc4", 00:16:57.745 "name": "Malloc4", 00:16:57.745 "nguid": "831D6EA41F3A40A0854C7B57F9BDB219", 00:16:57.745 "uuid": "831d6ea4-1f3a-40a0-854c-7b57f9bdb219" 00:16:57.745 } 00:16:57.745 ] 00:16:57.745 } 00:16:57.745 ] 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 332731 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 323888 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 323888 ']' 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 323888 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 323888 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 323888' 00:16:57.745 killing process with pid 323888 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 323888 00:16:57.745 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 323888 00:16:58.004 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:58.004 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:58.004 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=332800 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 332800' 00:16:58.005 Process pid: 332800 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 332800 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # '[' -z 332800 ']' 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.005 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:58.263 [2024-12-05 20:36:51.453303] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:58.263 [2024-12-05 20:36:51.454129] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:16:58.263 [2024-12-05 20:36:51.454164] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.263 [2024-12-05 20:36:51.528909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.263 [2024-12-05 20:36:51.566727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.263 [2024-12-05 20:36:51.566763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:58.263 [2024-12-05 20:36:51.566770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.263 [2024-12-05 20:36:51.566775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.263 [2024-12-05 20:36:51.566779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.263 [2024-12-05 20:36:51.568172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.263 [2024-12-05 20:36:51.568288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.263 [2024-12-05 20:36:51.568397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.263 [2024-12-05 20:36:51.568399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.263 [2024-12-05 20:36:51.635490] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:58.263 [2024-12-05 20:36:51.635885] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:58.263 [2024-12-05 20:36:51.636293] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:58.263 [2024-12-05 20:36:51.636502] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:58.263 [2024-12-05 20:36:51.636554] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:16:58.263 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.263 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:58.263 20:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:59.643 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:59.643 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:59.643 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:59.643 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:59.643 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:59.643 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:59.643 Malloc1 00:16:59.901 20:36:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:59.901 20:36:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:00.159 20:36:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:00.417 20:36:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:00.417 20:36:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:00.417 20:36:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:00.417 Malloc2 00:17:00.417 20:36:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:00.677 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:00.936 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:01.196 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:01.196 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 332800 00:17:01.196 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 332800 ']' 00:17:01.196 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 332800 00:17:01.196 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:01.196 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.196 20:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 332800 00:17:01.196 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.196 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.196 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 332800' 00:17:01.196 killing process with pid 332800 00:17:01.196 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 332800 00:17:01.196 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 332800 00:17:01.455 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:01.455 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:01.455 00:17:01.455 real 0m51.278s 00:17:01.455 user 3m18.519s 00:17:01.455 sys 0m3.042s 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:01.456 ************************************ 00:17:01.456 END TEST nvmf_vfio_user 00:17:01.456 ************************************ 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:01.456 ************************************ 00:17:01.456 START TEST nvmf_vfio_user_nvme_compliance 00:17:01.456 ************************************ 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:01.456 * Looking for test storage... 00:17:01.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.456 20:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.456 20:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:01.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.456 --rc genhtml_branch_coverage=1 00:17:01.456 --rc genhtml_function_coverage=1 00:17:01.456 --rc genhtml_legend=1 00:17:01.456 --rc geninfo_all_blocks=1 00:17:01.456 --rc geninfo_unexecuted_blocks=1 00:17:01.456 00:17:01.456 ' 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:01.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.456 --rc genhtml_branch_coverage=1 00:17:01.456 --rc genhtml_function_coverage=1 00:17:01.456 --rc genhtml_legend=1 00:17:01.456 --rc geninfo_all_blocks=1 00:17:01.456 --rc geninfo_unexecuted_blocks=1 00:17:01.456 00:17:01.456 ' 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:01.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.456 --rc genhtml_branch_coverage=1 00:17:01.456 --rc genhtml_function_coverage=1 00:17:01.456 --rc 
genhtml_legend=1 00:17:01.456 --rc geninfo_all_blocks=1 00:17:01.456 --rc geninfo_unexecuted_blocks=1 00:17:01.456 00:17:01.456 ' 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:01.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.456 --rc genhtml_branch_coverage=1 00:17:01.456 --rc genhtml_function_coverage=1 00:17:01.456 --rc genhtml_legend=1 00:17:01.456 --rc geninfo_all_blocks=1 00:17:01.456 --rc geninfo_unexecuted_blocks=1 00:17:01.456 00:17:01.456 ' 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.456 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.715 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:01.715 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:01.715 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.715 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.715 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.715 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.716 20:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.716 20:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=333652 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 333652' 00:17:01.716 Process pid: 333652 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 333652 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 333652 ']' 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.716 20:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:01.716 [2024-12-05 20:36:54.968792] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:17:01.716 [2024-12-05 20:36:54.968837] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.716 [2024-12-05 20:36:55.040541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.716 [2024-12-05 20:36:55.079001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.716 [2024-12-05 20:36:55.079034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.716 [2024-12-05 20:36:55.079040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.716 [2024-12-05 20:36:55.079046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.716 [2024-12-05 20:36:55.079053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:01.716 [2024-12-05 20:36:55.080412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.716 [2024-12-05 20:36:55.080527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.716 [2024-12-05 20:36:55.080529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.975 20:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.975 20:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:01.975 20:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.913 20:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.913 malloc0 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:02.913 20:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:03.173 00:17:03.173 00:17:03.173 CUnit - A unit testing framework for C - Version 2.1-3 00:17:03.173 http://cunit.sourceforge.net/ 00:17:03.173 00:17:03.173 00:17:03.173 Suite: nvme_compliance 00:17:03.173 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-05 20:36:56.408953] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.173 [2024-12-05 20:36:56.410272] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:03.173 [2024-12-05 20:36:56.410286] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:03.173 [2024-12-05 20:36:56.410292] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:03.173 [2024-12-05 20:36:56.411975] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.173 passed 00:17:03.173 Test: admin_identify_ctrlr_verify_fused ...[2024-12-05 20:36:56.487506] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.173 [2024-12-05 20:36:56.490528] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.173 passed 00:17:03.173 Test: admin_identify_ns ...[2024-12-05 20:36:56.563593] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.434 [2024-12-05 20:36:56.623073] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:03.434 [2024-12-05 20:36:56.631067] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:03.434 [2024-12-05 20:36:56.652161] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:17:03.434 passed 00:17:03.434 Test: admin_get_features_mandatory_features ...[2024-12-05 20:36:56.723903] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.434 [2024-12-05 20:36:56.728928] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.434 passed 00:17:03.434 Test: admin_get_features_optional_features ...[2024-12-05 20:36:56.802387] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.434 [2024-12-05 20:36:56.805403] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.434 passed 00:17:03.693 Test: admin_set_features_number_of_queues ...[2024-12-05 20:36:56.875561] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.693 [2024-12-05 20:36:56.980151] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.693 passed 00:17:03.693 Test: admin_get_log_page_mandatory_logs ...[2024-12-05 20:36:57.054819] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.693 [2024-12-05 20:36:57.057845] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.693 passed 00:17:03.693 Test: admin_get_log_page_with_lpo ...[2024-12-05 20:36:57.130078] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.952 [2024-12-05 20:36:57.200069] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:03.953 [2024-12-05 20:36:57.213119] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.953 passed 00:17:03.953 Test: fabric_property_get ...[2024-12-05 20:36:57.283811] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.953 [2024-12-05 20:36:57.285020] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:03.953 [2024-12-05 20:36:57.286834] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.953 passed 00:17:03.953 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-05 20:36:57.360489] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.953 [2024-12-05 20:36:57.361706] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:03.953 [2024-12-05 20:36:57.363507] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.953 passed 00:17:04.212 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-05 20:36:57.435689] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.212 [2024-12-05 20:36:57.523067] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:04.212 [2024-12-05 20:36:57.539063] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:04.212 [2024-12-05 20:36:57.544160] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.212 passed 00:17:04.212 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-05 20:36:57.614974] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.212 [2024-12-05 20:36:57.616198] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:04.212 [2024-12-05 20:36:57.617993] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.212 passed 00:17:04.471 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-05 20:36:57.690873] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.471 [2024-12-05 20:36:57.766067] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:04.471 [2024-12-05 
20:36:57.790065] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:04.471 [2024-12-05 20:36:57.795140] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.471 passed 00:17:04.471 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-05 20:36:57.869777] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.471 [2024-12-05 20:36:57.870992] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:04.471 [2024-12-05 20:36:57.871013] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:04.471 [2024-12-05 20:36:57.872801] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.471 passed 00:17:04.731 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-05 20:36:57.944992] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.731 [2024-12-05 20:36:58.034074] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:04.731 [2024-12-05 20:36:58.042063] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:04.731 [2024-12-05 20:36:58.050067] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:04.731 [2024-12-05 20:36:58.058075] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:04.731 [2024-12-05 20:36:58.087149] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.731 passed 00:17:04.731 Test: admin_create_io_sq_verify_pc ...[2024-12-05 20:36:58.161856] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.990 [2024-12-05 20:36:58.178073] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:04.990 [2024-12-05 20:36:58.195035] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.990 passed 00:17:04.990 Test: admin_create_io_qp_max_qps ...[2024-12-05 20:36:58.266502] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:05.929 [2024-12-05 20:36:59.355066] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:06.499 [2024-12-05 20:36:59.753730] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:06.499 passed 00:17:06.499 Test: admin_create_io_sq_shared_cq ...[2024-12-05 20:36:59.826904] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:06.759 [2024-12-05 20:36:59.958066] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:06.759 [2024-12-05 20:36:59.995126] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:06.759 passed 00:17:06.759 00:17:06.759 Run Summary: Type Total Ran Passed Failed Inactive 00:17:06.759 suites 1 1 n/a 0 0 00:17:06.759 tests 18 18 18 0 0 00:17:06.759 asserts 360 360 360 0 n/a 00:17:06.759 00:17:06.759 Elapsed time = 1.472 seconds 00:17:06.759 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 333652 00:17:06.759 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 333652 ']' 00:17:06.759 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 333652 00:17:06.759 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:06.759 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.759 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333652 00:17:06.759 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.759 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:06.759 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333652' 00:17:06.759 killing process with pid 333652 00:17:06.759 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 333652 00:17:06.759 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 333652 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:07.019 00:17:07.019 real 0m5.555s 00:17:07.019 user 0m15.530s 00:17:07.019 sys 0m0.512s 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:07.019 ************************************ 00:17:07.019 END TEST nvmf_vfio_user_nvme_compliance 00:17:07.019 ************************************ 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.019 ************************************ 00:17:07.019 START TEST nvmf_vfio_user_fuzz 00:17:07.019 ************************************ 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:07.019 * Looking for test storage... 00:17:07.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:17:07.019 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.280 20:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:07.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.280 --rc genhtml_branch_coverage=1 00:17:07.280 --rc genhtml_function_coverage=1 00:17:07.280 --rc genhtml_legend=1 00:17:07.280 --rc geninfo_all_blocks=1 00:17:07.280 --rc geninfo_unexecuted_blocks=1 00:17:07.280 00:17:07.280 ' 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:07.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.280 --rc genhtml_branch_coverage=1 00:17:07.280 --rc genhtml_function_coverage=1 00:17:07.280 --rc genhtml_legend=1 00:17:07.280 --rc geninfo_all_blocks=1 00:17:07.280 --rc geninfo_unexecuted_blocks=1 00:17:07.280 00:17:07.280 ' 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:07.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.280 --rc genhtml_branch_coverage=1 00:17:07.280 --rc genhtml_function_coverage=1 00:17:07.280 --rc genhtml_legend=1 00:17:07.280 --rc geninfo_all_blocks=1 00:17:07.280 --rc geninfo_unexecuted_blocks=1 00:17:07.280 00:17:07.280 ' 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:07.280 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:07.280 --rc genhtml_branch_coverage=1 00:17:07.280 --rc genhtml_function_coverage=1 00:17:07.280 --rc genhtml_legend=1 00:17:07.280 --rc geninfo_all_blocks=1 00:17:07.280 --rc geninfo_unexecuted_blocks=1 00:17:07.280 00:17:07.280 ' 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.280 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.281 20:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:07.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=334720 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 334720' 00:17:07.281 Process pid: 334720 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 334720 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 334720 ']' 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.281 20:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.281 20:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:08.219 20:37:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.219 20:37:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:08.220 20:37:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:09.160 malloc0 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:09.160 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:41.246 Fuzzing completed. Shutting down the fuzz application 00:17:41.246 00:17:41.246 Dumping successful admin opcodes: 00:17:41.246 9, 10, 00:17:41.246 Dumping successful io opcodes: 00:17:41.246 0, 00:17:41.246 NS: 0x20000081ef00 I/O qp, Total commands completed: 1229022, total successful commands: 4827, random_seed: 1266257344 00:17:41.246 NS: 0x20000081ef00 admin qp, Total commands completed: 303568, total successful commands: 74, random_seed: 1032080320 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 334720 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 334720 ']' 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 334720 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 334720 00:17:41.246 20:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 334720' 00:17:41.246 killing process with pid 334720 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 334720 00:17:41.246 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 334720 00:17:41.246 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:41.247 00:17:41.247 real 0m32.825s 00:17:41.247 user 0m35.124s 00:17:41.247 sys 0m26.692s 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:41.247 ************************************ 00:17:41.247 END TEST nvmf_vfio_user_fuzz 00:17:41.247 ************************************ 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:41.247 ************************************ 00:17:41.247 START TEST nvmf_auth_target 00:17:41.247 ************************************ 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:41.247 * Looking for test storage... 00:17:41.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.247 20:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.247 20:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.247 --rc genhtml_branch_coverage=1 00:17:41.247 --rc genhtml_function_coverage=1 00:17:41.247 --rc genhtml_legend=1 00:17:41.247 --rc geninfo_all_blocks=1 00:17:41.247 --rc geninfo_unexecuted_blocks=1 00:17:41.247 00:17:41.247 ' 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.247 --rc genhtml_branch_coverage=1 00:17:41.247 --rc genhtml_function_coverage=1 00:17:41.247 --rc genhtml_legend=1 00:17:41.247 --rc geninfo_all_blocks=1 00:17:41.247 --rc geninfo_unexecuted_blocks=1 00:17:41.247 00:17:41.247 ' 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.247 --rc genhtml_branch_coverage=1 00:17:41.247 --rc genhtml_function_coverage=1 00:17:41.247 --rc genhtml_legend=1 00:17:41.247 --rc geninfo_all_blocks=1 00:17:41.247 --rc geninfo_unexecuted_blocks=1 00:17:41.247 00:17:41.247 ' 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.247 --rc genhtml_branch_coverage=1 00:17:41.247 --rc genhtml_function_coverage=1 00:17:41.247 --rc genhtml_legend=1 00:17:41.247 
--rc geninfo_all_blocks=1 00:17:41.247 --rc geninfo_unexecuted_blocks=1 00:17:41.247 00:17:41.247 ' 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.247 
20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.247 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:41.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:41.248 20:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:41.248 20:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:41.248 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:46.528 20:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.528 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:46.529 20:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:46.529 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:46.529 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.529 
20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:46.529 Found net devices under 0000:af:00.0: cvl_0_0 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.529 
20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:46.529 Found net devices under 0000:af:00.1: cvl_0_1 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:46.529 20:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:46.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:17:46.529 00:17:46.529 --- 10.0.0.2 ping statistics --- 00:17:46.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.529 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:17:46.529 00:17:46.529 --- 10.0.0.1 ping statistics --- 00:17:46.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.529 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=343702 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 343702 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:46.529 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 343702 ']' 00:17:46.530 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.530 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.530 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:46.530 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.530 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=343980 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9faeb52b43743002cc93155a358d0374d1621186b4691200 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6gY 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9faeb52b43743002cc93155a358d0374d1621186b4691200 0 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9faeb52b43743002cc93155a358d0374d1621186b4691200 0 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9faeb52b43743002cc93155a358d0374d1621186b4691200 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6gY 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6gY 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.6gY 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c0d63b656468432390f98cbafba90aba5368a830d753d0095ffd74685a2d3451 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pnw 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c0d63b656468432390f98cbafba90aba5368a830d753d0095ffd74685a2d3451 3 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c0d63b656468432390f98cbafba90aba5368a830d753d0095ffd74685a2d3451 3 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c0d63b656468432390f98cbafba90aba5368a830d753d0095ffd74685a2d3451 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pnw 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pnw 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.pnw 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8c6a3cf81efe500bc53c127a55d1fb7b 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.toY 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8c6a3cf81efe500bc53c127a55d1fb7b 1 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
8c6a3cf81efe500bc53c127a55d1fb7b 1 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8c6a3cf81efe500bc53c127a55d1fb7b 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.toY 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.toY 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.toY 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.099 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dcd18597a52000fcf05936682cf911a0820c6befed27fced 00:17:47.099 20:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2VJ 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dcd18597a52000fcf05936682cf911a0820c6befed27fced 2 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dcd18597a52000fcf05936682cf911a0820c6befed27fced 2 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dcd18597a52000fcf05936682cf911a0820c6befed27fced 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2VJ 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2VJ 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.2VJ 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=abfe8d4dcea70093afb4f5bcac7af27f14177f834fa330c2 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.txi 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key abfe8d4dcea70093afb4f5bcac7af27f14177f834fa330c2 2 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 abfe8d4dcea70093afb4f5bcac7af27f14177f834fa330c2 2 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=abfe8d4dcea70093afb4f5bcac7af27f14177f834fa330c2 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.txi 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.txi 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.txi 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=28fd2787cc2a65fb20e0632bb9dc236d 00:17:47.359 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.NLa 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 28fd2787cc2a65fb20e0632bb9dc236d 1 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 28fd2787cc2a65fb20e0632bb9dc236d 1 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=28fd2787cc2a65fb20e0632bb9dc236d 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.NLa 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.NLa 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.NLa 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e7575e9f6fbe3d84a038e4a5f350f7a7591a81624dc3497afc231b0a136a657f 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Caz 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e7575e9f6fbe3d84a038e4a5f350f7a7591a81624dc3497afc231b0a136a657f 3 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 e7575e9f6fbe3d84a038e4a5f350f7a7591a81624dc3497afc231b0a136a657f 3 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e7575e9f6fbe3d84a038e4a5f350f7a7591a81624dc3497afc231b0a136a657f 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Caz 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Caz 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Caz 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 343702 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 343702 ']' 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
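The gen_dhchap_key / format_dhchap_key trace above pulls random hex from /dev/urandom with `xxd`, then an inline `python -` heredoc wraps it into the `DHHC-1:<hash-id>:<base64>:` secrets that show up later in the `nvme connect` commands. A minimal sketch of that wrapping, assuming the secret's ASCII hex string with a little-endian CRC32 appended is what gets base64-encoded (the function name and the endianness choice here are illustrative assumptions, not a copy of SPDK's actual helper in nvmf/common.sh):

```python
import base64
import zlib


def format_dhchap_key(secret_hex: str, hash_id: int) -> str:
    # Assumption: the hex string is treated as an ASCII passphrase, a
    # little-endian CRC32 of it is appended, and the result is base64-encoded
    # under the DHHC-1 prefix with a two-digit hash identifier
    # (0=null, 1=sha256, 2=sha384, 3=sha512, matching the digests map above).
    data = secret_hex.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    blob = base64.b64encode(data + crc).decode("ascii")
    return "DHHC-1:{:02x}:{}:".format(hash_id, blob)
```

Base64-decoding the middle field of a secret from the later `nvme connect` line yields the ASCII hex of the generated key, which is a quick way to confirm which key landed in which slot.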
00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.360 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.619 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.619 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:47.619 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 343980 /var/tmp/host.sock 00:17:47.619 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 343980 ']' 00:17:47.619 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:47.619 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.619 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:47.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
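The two waitforlisten calls above block until the target (on /var/tmp/spdk.sock) and then the host process (on /var/tmp/host.sock) accept RPC connections, retrying up to max_retries. A rough stand-in for that retry loop, under the assumption that a successful connect on the UNIX socket is the readiness signal (the helper name is hypothetical; the real logic lives in autotest_common.sh):

```python
import os
import socket
import tempfile
import time


def wait_for_unix_socket(path: str, timeout: float = 5.0) -> bool:
    # Poll until a process is listening on the given UNIX domain socket,
    # mirroring the "Waiting for process to start up and listen on UNIX
    # domain socket ..." loop in the log. Returns False on timeout.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
                return True
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(0.1)
    return False
```

Polling with connect attempts (rather than watching for the socket file to exist) avoids the race where the file has been created but the server is not yet accepting.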
00:17:47.619 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.619 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6gY 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.6gY 00:17:47.880 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.6gY 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.pnw ]] 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pnw 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pnw 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pnw 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.toY 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.toY 00:17:48.139 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.toY 00:17:48.399 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.2VJ ]] 00:17:48.399 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2VJ 00:17:48.399 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.399 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.399 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.399 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2VJ 00:17:48.399 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2VJ 00:17:48.659 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:48.659 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.txi 00:17:48.659 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.659 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.659 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.659 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.txi 00:17:48.659 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.txi 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.NLa ]] 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NLa 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NLa 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NLa 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Caz 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Caz 00:17:48.917 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Caz 00:17:49.191 20:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:49.191 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:49.191 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.191 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.191 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.191 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.451 20:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.451 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.710 00:17:49.710 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.710 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.710 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.710 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.710 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.710 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.710 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.710 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.710 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.710 { 00:17:49.710 "cntlid": 1, 00:17:49.710 "qid": 0, 00:17:49.710 "state": "enabled", 00:17:49.710 "thread": "nvmf_tgt_poll_group_000", 00:17:49.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:49.710 "listen_address": { 00:17:49.710 "trtype": "TCP", 00:17:49.710 "adrfam": "IPv4", 00:17:49.710 "traddr": "10.0.0.2", 00:17:49.710 "trsvcid": "4420" 00:17:49.710 }, 00:17:49.710 "peer_address": { 00:17:49.710 "trtype": "TCP", 00:17:49.710 "adrfam": "IPv4", 00:17:49.710 "traddr": "10.0.0.1", 00:17:49.710 "trsvcid": "56812" 00:17:49.710 }, 00:17:49.710 "auth": { 00:17:49.710 "state": "completed", 00:17:49.710 "digest": "sha256", 00:17:49.710 "dhgroup": "null" 00:17:49.710 } 00:17:49.710 } 00:17:49.710 ]' 00:17:49.710 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.970 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.970 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.970 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:49.970 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.970 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.970 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.970 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.228 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:17:50.228 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.517 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.517 20:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.777 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.777 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.777 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.777 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.777 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.777 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.777 { 00:17:53.777 "cntlid": 3, 00:17:53.777 "qid": 0, 00:17:53.777 "state": "enabled", 00:17:53.777 "thread": "nvmf_tgt_poll_group_000", 00:17:53.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:53.777 "listen_address": { 00:17:53.777 "trtype": "TCP", 00:17:53.777 "adrfam": "IPv4", 00:17:53.777 
"traddr": "10.0.0.2", 00:17:53.777 "trsvcid": "4420" 00:17:53.777 }, 00:17:53.777 "peer_address": { 00:17:53.777 "trtype": "TCP", 00:17:53.777 "adrfam": "IPv4", 00:17:53.777 "traddr": "10.0.0.1", 00:17:53.777 "trsvcid": "33590" 00:17:53.777 }, 00:17:53.777 "auth": { 00:17:53.777 "state": "completed", 00:17:53.777 "digest": "sha256", 00:17:53.777 "dhgroup": "null" 00:17:53.777 } 00:17:53.777 } 00:17:53.777 ]' 00:17:53.777 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.777 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.777 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.777 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:53.777 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.037 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.037 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.038 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.038 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:17:54.038 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:17:54.607 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.607 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:54.607 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.607 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.607 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.607 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.607 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.607 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.867 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.127 00:17:55.127 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.127 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.127 
20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.386 { 00:17:55.386 "cntlid": 5, 00:17:55.386 "qid": 0, 00:17:55.386 "state": "enabled", 00:17:55.386 "thread": "nvmf_tgt_poll_group_000", 00:17:55.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:55.386 "listen_address": { 00:17:55.386 "trtype": "TCP", 00:17:55.386 "adrfam": "IPv4", 00:17:55.386 "traddr": "10.0.0.2", 00:17:55.386 "trsvcid": "4420" 00:17:55.386 }, 00:17:55.386 "peer_address": { 00:17:55.386 "trtype": "TCP", 00:17:55.386 "adrfam": "IPv4", 00:17:55.386 "traddr": "10.0.0.1", 00:17:55.386 "trsvcid": "33620" 00:17:55.386 }, 00:17:55.386 "auth": { 00:17:55.386 "state": "completed", 00:17:55.386 "digest": "sha256", 00:17:55.386 "dhgroup": "null" 00:17:55.386 } 00:17:55.386 } 00:17:55.386 ]' 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.386 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.645 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:17:55.645 20:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:17:56.215 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.215 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:56.215 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.215 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.215 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.215 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.215 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.215 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.475 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.475 00:17:56.736 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.736 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.736 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.736 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.736 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.736 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.736 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.736 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.736 
20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.736 { 00:17:56.736 "cntlid": 7, 00:17:56.736 "qid": 0, 00:17:56.736 "state": "enabled", 00:17:56.736 "thread": "nvmf_tgt_poll_group_000", 00:17:56.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:56.736 "listen_address": { 00:17:56.736 "trtype": "TCP", 00:17:56.736 "adrfam": "IPv4", 00:17:56.736 "traddr": "10.0.0.2", 00:17:56.736 "trsvcid": "4420" 00:17:56.736 }, 00:17:56.736 "peer_address": { 00:17:56.736 "trtype": "TCP", 00:17:56.736 "adrfam": "IPv4", 00:17:56.736 "traddr": "10.0.0.1", 00:17:56.736 "trsvcid": "33642" 00:17:56.736 }, 00:17:56.736 "auth": { 00:17:56.736 "state": "completed", 00:17:56.736 "digest": "sha256", 00:17:56.736 "dhgroup": "null" 00:17:56.736 } 00:17:56.736 } 00:17:56.736 ]' 00:17:56.736 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.736 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.736 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.996 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:56.996 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.996 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.996 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.996 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.255 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:17:57.255 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:17:57.822 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.822 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:57.822 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.822 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.822 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.822 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.822 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.822 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:57.822 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:57.822 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.823 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.082 00:17:58.082 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.082 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.082 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.343 { 00:17:58.343 "cntlid": 9, 00:17:58.343 "qid": 0, 00:17:58.343 "state": "enabled", 00:17:58.343 "thread": "nvmf_tgt_poll_group_000", 00:17:58.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:58.343 "listen_address": { 00:17:58.343 "trtype": "TCP", 00:17:58.343 "adrfam": "IPv4", 00:17:58.343 "traddr": "10.0.0.2", 00:17:58.343 "trsvcid": "4420" 00:17:58.343 }, 00:17:58.343 "peer_address": { 00:17:58.343 "trtype": "TCP", 00:17:58.343 "adrfam": "IPv4", 00:17:58.343 "traddr": "10.0.0.1", 00:17:58.343 "trsvcid": "33674" 00:17:58.343 
}, 00:17:58.343 "auth": { 00:17:58.343 "state": "completed", 00:17:58.343 "digest": "sha256", 00:17:58.343 "dhgroup": "ffdhe2048" 00:17:58.343 } 00:17:58.343 } 00:17:58.343 ]' 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.343 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.602 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:17:58.603 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret 
DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:17:59.168 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.169 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:59.169 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.169 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.169 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.169 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.169 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:59.169 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.428 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.687 00:17:59.687 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.687 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.687 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.687 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.687 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.687 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.687 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.688 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.688 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.688 { 00:17:59.688 "cntlid": 11, 00:17:59.688 "qid": 0, 00:17:59.688 "state": "enabled", 00:17:59.688 "thread": "nvmf_tgt_poll_group_000", 00:17:59.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:59.688 "listen_address": { 00:17:59.688 "trtype": "TCP", 00:17:59.688 "adrfam": "IPv4", 00:17:59.688 "traddr": "10.0.0.2", 00:17:59.688 "trsvcid": "4420" 00:17:59.688 }, 00:17:59.688 "peer_address": { 00:17:59.688 "trtype": "TCP", 00:17:59.688 "adrfam": "IPv4", 00:17:59.688 "traddr": "10.0.0.1", 00:17:59.688 "trsvcid": "33700" 00:17:59.688 }, 00:17:59.688 "auth": { 00:17:59.688 "state": "completed", 00:17:59.688 "digest": "sha256", 00:17:59.688 "dhgroup": "ffdhe2048" 00:17:59.688 } 00:17:59.688 } 00:17:59.688 ]' 00:17:59.688 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.948 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.948 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.948 20:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.948 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.948 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.948 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.948 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.208 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:00.208 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:00.774 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.774 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:00.774 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:00.774 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.774 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.774 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.774 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.775 20:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.775 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.033 00:18:01.033 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.033 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.033 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.293 20:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.293 { 00:18:01.293 "cntlid": 13, 00:18:01.293 "qid": 0, 00:18:01.293 "state": "enabled", 00:18:01.293 "thread": "nvmf_tgt_poll_group_000", 00:18:01.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:01.293 "listen_address": { 00:18:01.293 "trtype": "TCP", 00:18:01.293 "adrfam": "IPv4", 00:18:01.293 "traddr": "10.0.0.2", 00:18:01.293 "trsvcid": "4420" 00:18:01.293 }, 00:18:01.293 "peer_address": { 00:18:01.293 "trtype": "TCP", 00:18:01.293 "adrfam": "IPv4", 00:18:01.293 "traddr": "10.0.0.1", 00:18:01.293 "trsvcid": "47460" 00:18:01.293 }, 00:18:01.293 "auth": { 00:18:01.293 "state": "completed", 00:18:01.293 "digest": "sha256", 00:18:01.293 "dhgroup": "ffdhe2048" 00:18:01.293 } 00:18:01.293 } 00:18:01.293 ]' 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.293 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.552 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:01.552 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:02.118 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.118 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:02.118 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.118 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.118 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.118 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.118 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.118 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.377 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.635 00:18:02.635 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.635 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.635 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.635 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.635 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.635 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.635 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.635 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.635 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.635 { 00:18:02.635 "cntlid": 15, 00:18:02.635 "qid": 0, 00:18:02.635 "state": "enabled", 00:18:02.635 "thread": "nvmf_tgt_poll_group_000", 00:18:02.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:02.635 "listen_address": { 00:18:02.635 "trtype": "TCP", 00:18:02.635 "adrfam": "IPv4", 00:18:02.635 "traddr": "10.0.0.2", 00:18:02.635 "trsvcid": "4420" 00:18:02.635 }, 00:18:02.635 "peer_address": { 00:18:02.635 "trtype": "TCP", 00:18:02.635 "adrfam": "IPv4", 00:18:02.635 "traddr": "10.0.0.1", 
00:18:02.635 "trsvcid": "47500" 00:18:02.635 }, 00:18:02.635 "auth": { 00:18:02.635 "state": "completed", 00:18:02.635 "digest": "sha256", 00:18:02.635 "dhgroup": "ffdhe2048" 00:18:02.635 } 00:18:02.635 } 00:18:02.635 ]' 00:18:02.635 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.635 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.635 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.894 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.894 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.894 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.894 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.894 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.894 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:02.894 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:03.464 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.464 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:03.464 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.464 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.464 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.464 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.464 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.464 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.464 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.723 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:03.723 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.723 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.723 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:03.723 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.723 20:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.723 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.723 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.723 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.723 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.724 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.724 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.724 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.982 00:18:03.982 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.982 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.982 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.241 { 00:18:04.241 "cntlid": 17, 00:18:04.241 "qid": 0, 00:18:04.241 "state": "enabled", 00:18:04.241 "thread": "nvmf_tgt_poll_group_000", 00:18:04.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:04.241 "listen_address": { 00:18:04.241 "trtype": "TCP", 00:18:04.241 "adrfam": "IPv4", 00:18:04.241 "traddr": "10.0.0.2", 00:18:04.241 "trsvcid": "4420" 00:18:04.241 }, 00:18:04.241 "peer_address": { 00:18:04.241 "trtype": "TCP", 00:18:04.241 "adrfam": "IPv4", 00:18:04.241 "traddr": "10.0.0.1", 00:18:04.241 "trsvcid": "47524" 00:18:04.241 }, 00:18:04.241 "auth": { 00:18:04.241 "state": "completed", 00:18:04.241 "digest": "sha256", 00:18:04.241 "dhgroup": "ffdhe3072" 00:18:04.241 } 00:18:04.241 } 00:18:04.241 ]' 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.241 20:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.241 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.499 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:04.499 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:05.141 20:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.141 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.141 20:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.417 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.417 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.417 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.417 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.417 00:18:05.417 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.417 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.417 20:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.689 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.689 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.689 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.689 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:05.689 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.689 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.689 { 00:18:05.689 "cntlid": 19, 00:18:05.689 "qid": 0, 00:18:05.689 "state": "enabled", 00:18:05.689 "thread": "nvmf_tgt_poll_group_000", 00:18:05.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:05.689 "listen_address": { 00:18:05.689 "trtype": "TCP", 00:18:05.689 "adrfam": "IPv4", 00:18:05.689 "traddr": "10.0.0.2", 00:18:05.689 "trsvcid": "4420" 00:18:05.689 }, 00:18:05.689 "peer_address": { 00:18:05.689 "trtype": "TCP", 00:18:05.689 "adrfam": "IPv4", 00:18:05.689 "traddr": "10.0.0.1", 00:18:05.689 "trsvcid": "47566" 00:18:05.689 }, 00:18:05.689 "auth": { 00:18:05.689 "state": "completed", 00:18:05.689 "digest": "sha256", 00:18:05.689 "dhgroup": "ffdhe3072" 00:18:05.689 } 00:18:05.689 } 00:18:05.689 ]' 00:18:05.689 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.689 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.689 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.689 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:05.689 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.970 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.970 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.970 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.970 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:05.970 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:06.571 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.571 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:06.571 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.571 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.571 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.571 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.571 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.571 20:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.842 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.117 00:18:07.117 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.117 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.117 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.117 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.117 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.117 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.117 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.117 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.117 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.117 { 00:18:07.117 "cntlid": 21, 00:18:07.117 "qid": 0, 00:18:07.117 "state": "enabled", 00:18:07.117 "thread": "nvmf_tgt_poll_group_000", 00:18:07.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:07.117 "listen_address": { 00:18:07.117 "trtype": "TCP", 00:18:07.117 "adrfam": "IPv4", 00:18:07.117 "traddr": "10.0.0.2", 00:18:07.117 
"trsvcid": "4420" 00:18:07.117 }, 00:18:07.117 "peer_address": { 00:18:07.117 "trtype": "TCP", 00:18:07.117 "adrfam": "IPv4", 00:18:07.117 "traddr": "10.0.0.1", 00:18:07.117 "trsvcid": "47598" 00:18:07.117 }, 00:18:07.117 "auth": { 00:18:07.117 "state": "completed", 00:18:07.117 "digest": "sha256", 00:18:07.117 "dhgroup": "ffdhe3072" 00:18:07.117 } 00:18:07.117 } 00:18:07.117 ]' 00:18:07.117 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.395 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.395 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.395 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.395 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.395 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.395 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.395 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.669 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:07.669 20:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.251 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.519 00:18:08.519 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.519 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.519 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.790 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.790 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.790 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.790 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.790 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.790 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.790 { 00:18:08.790 "cntlid": 23, 00:18:08.790 "qid": 0, 00:18:08.790 "state": "enabled", 00:18:08.790 "thread": "nvmf_tgt_poll_group_000", 00:18:08.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:08.790 "listen_address": { 00:18:08.790 "trtype": "TCP", 00:18:08.790 "adrfam": "IPv4", 00:18:08.790 "traddr": "10.0.0.2", 00:18:08.790 "trsvcid": "4420" 00:18:08.790 }, 00:18:08.790 "peer_address": { 00:18:08.790 "trtype": "TCP", 00:18:08.790 "adrfam": "IPv4", 00:18:08.790 "traddr": "10.0.0.1", 00:18:08.790 "trsvcid": "47640" 00:18:08.791 }, 00:18:08.791 "auth": { 00:18:08.791 "state": "completed", 00:18:08.791 "digest": "sha256", 00:18:08.791 "dhgroup": "ffdhe3072" 00:18:08.791 } 00:18:08.791 } 00:18:08.791 ]' 00:18:08.791 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.791 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.791 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.791 20:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.791 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.791 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.791 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.791 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.063 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:09.063 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:09.649 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.649 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:09.649 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.649 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:09.649 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.649 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.649 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.649 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:09.649 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.926 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.220 00:18:10.220 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.220 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.220 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.220 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.220 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.220 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.220 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.220 20:38:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.220 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.220 { 00:18:10.220 "cntlid": 25, 00:18:10.220 "qid": 0, 00:18:10.220 "state": "enabled", 00:18:10.220 "thread": "nvmf_tgt_poll_group_000", 00:18:10.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:10.220 "listen_address": { 00:18:10.220 "trtype": "TCP", 00:18:10.220 "adrfam": "IPv4", 00:18:10.220 "traddr": "10.0.0.2", 00:18:10.220 "trsvcid": "4420" 00:18:10.220 }, 00:18:10.220 "peer_address": { 00:18:10.220 "trtype": "TCP", 00:18:10.220 "adrfam": "IPv4", 00:18:10.220 "traddr": "10.0.0.1", 00:18:10.220 "trsvcid": "33394" 00:18:10.220 }, 00:18:10.220 "auth": { 00:18:10.220 "state": "completed", 00:18:10.220 "digest": "sha256", 00:18:10.220 "dhgroup": "ffdhe4096" 00:18:10.220 } 00:18:10.220 } 00:18:10.220 ]' 00:18:10.220 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.220 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.220 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.508 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.508 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.508 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.508 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.508 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.508 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:10.508 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:11.095 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.095 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:11.095 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.095 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.095 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.096 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.096 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.096 20:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.363 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.642 00:18:11.642 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.642 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.643 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.643 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.915 { 00:18:11.915 "cntlid": 27, 00:18:11.915 "qid": 0, 00:18:11.915 "state": "enabled", 00:18:11.915 "thread": "nvmf_tgt_poll_group_000", 00:18:11.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:11.915 "listen_address": { 00:18:11.915 "trtype": "TCP", 00:18:11.915 "adrfam": "IPv4", 00:18:11.915 "traddr": "10.0.0.2", 00:18:11.915 
"trsvcid": "4420" 00:18:11.915 }, 00:18:11.915 "peer_address": { 00:18:11.915 "trtype": "TCP", 00:18:11.915 "adrfam": "IPv4", 00:18:11.915 "traddr": "10.0.0.1", 00:18:11.915 "trsvcid": "33416" 00:18:11.915 }, 00:18:11.915 "auth": { 00:18:11.915 "state": "completed", 00:18:11.915 "digest": "sha256", 00:18:11.915 "dhgroup": "ffdhe4096" 00:18:11.915 } 00:18:11.915 } 00:18:11.915 ]' 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.915 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.192 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:12.192 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:12.780 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.780 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:12.780 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.780 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.780 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.780 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.780 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.780 20:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.780 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.071 00:18:13.071 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.071 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:13.071 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.342 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.342 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.342 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.342 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.342 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.342 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.342 { 00:18:13.342 "cntlid": 29, 00:18:13.342 "qid": 0, 00:18:13.342 "state": "enabled", 00:18:13.342 "thread": "nvmf_tgt_poll_group_000", 00:18:13.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:13.342 "listen_address": { 00:18:13.342 "trtype": "TCP", 00:18:13.342 "adrfam": "IPv4", 00:18:13.342 "traddr": "10.0.0.2", 00:18:13.342 "trsvcid": "4420" 00:18:13.342 }, 00:18:13.342 "peer_address": { 00:18:13.342 "trtype": "TCP", 00:18:13.342 "adrfam": "IPv4", 00:18:13.342 "traddr": "10.0.0.1", 00:18:13.342 "trsvcid": "33446" 00:18:13.342 }, 00:18:13.342 "auth": { 00:18:13.342 "state": "completed", 00:18:13.342 "digest": "sha256", 00:18:13.342 "dhgroup": "ffdhe4096" 00:18:13.342 } 00:18:13.342 } 00:18:13.342 ]' 00:18:13.342 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.342 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.343 20:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.343 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.343 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.343 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.343 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.343 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.662 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:13.662 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:14.300 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:14.301 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:14.301 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.301 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:14.301 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.301 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.301 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.301 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.301 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.301 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.578 00:18:14.578 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.578 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.578 20:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.859 { 00:18:14.859 "cntlid": 31, 00:18:14.859 "qid": 0, 00:18:14.859 "state": "enabled", 00:18:14.859 "thread": "nvmf_tgt_poll_group_000", 00:18:14.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:14.859 "listen_address": { 00:18:14.859 "trtype": "TCP", 00:18:14.859 "adrfam": "IPv4", 00:18:14.859 "traddr": "10.0.0.2", 00:18:14.859 "trsvcid": "4420" 00:18:14.859 }, 00:18:14.859 "peer_address": { 00:18:14.859 "trtype": "TCP", 00:18:14.859 "adrfam": "IPv4", 00:18:14.859 "traddr": "10.0.0.1", 00:18:14.859 "trsvcid": "33482" 00:18:14.859 }, 00:18:14.859 "auth": { 00:18:14.859 "state": "completed", 00:18:14.859 "digest": "sha256", 00:18:14.859 "dhgroup": "ffdhe4096" 00:18:14.859 } 00:18:14.859 } 00:18:14.859 ]' 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.859 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.141 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:15.141 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:15.771 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.771 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:15.771 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.771 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.771 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.771 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.771 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.771 20:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:15.771 20:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.771 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.065 00:18:16.065 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.065 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.065 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.356 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.356 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.356 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.356 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.356 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.356 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.356 { 00:18:16.356 "cntlid": 33, 00:18:16.356 "qid": 0, 00:18:16.356 "state": "enabled", 00:18:16.357 "thread": "nvmf_tgt_poll_group_000", 00:18:16.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:16.357 "listen_address": { 00:18:16.357 "trtype": "TCP", 00:18:16.357 "adrfam": "IPv4", 00:18:16.357 "traddr": "10.0.0.2", 00:18:16.357 
"trsvcid": "4420" 00:18:16.357 }, 00:18:16.357 "peer_address": { 00:18:16.357 "trtype": "TCP", 00:18:16.357 "adrfam": "IPv4", 00:18:16.357 "traddr": "10.0.0.1", 00:18:16.357 "trsvcid": "33502" 00:18:16.357 }, 00:18:16.357 "auth": { 00:18:16.357 "state": "completed", 00:18:16.357 "digest": "sha256", 00:18:16.357 "dhgroup": "ffdhe6144" 00:18:16.357 } 00:18:16.357 } 00:18:16.357 ]' 00:18:16.357 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.357 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.357 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.357 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.357 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.357 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.357 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.357 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.634 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:16.634 20:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:17.227 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.227 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:17.227 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.227 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.227 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.227 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.227 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:17.227 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.498 20:38:10 
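The trace above repeats one verification cycle per (digest, dhgroup, key) combination. A dry-run sketch of that cycle, with the subsystem/host NQNs taken from the log (this is not the SPDK `target/auth.sh` script itself; `RPC` is set to `echo` so the RPC invocations are printed rather than executed — on a live target it would be `scripts/rpc.py -s /var/tmp/host.sock`):

```shell
# Dry-run sketch of the per-key DH-HMAC-CHAP cycle traced in the log.
# RPC=echo prints each RPC instead of executing it.
RPC="echo"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562"

auth_cycle() {
    local digest=$1 dhgroup=$2 keyid=$3
    # restrict the host to the digest/dhgroup combination under test
    $RPC bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # allow the host on the subsystem with the matching key
    # (when a controller key is configured, the real test also
    #  passes --dhchap-ctrlr-key "ckey$keyid" here and below)
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid"
    # attach a controller, which forces the authentication handshake
    $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid"
    # (the test then checks digest/dhgroup/state via nvmf_subsystem_get_qpairs)
    # tear down before the next combination
    $RPC bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
}

auth_cycle sha256 ffdhe6144 0
```

Each cycle ends in a detach/remove so the next (digest, dhgroup, key) tuple starts from a clean subsystem, which is why the same `set_options`/`add_host`/`attach_controller` triple recurs throughout the log.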
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.498 20:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.776 00:18:17.776 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.776 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.776 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.057 { 00:18:18.057 "cntlid": 35, 00:18:18.057 "qid": 0, 00:18:18.057 "state": "enabled", 00:18:18.057 "thread": "nvmf_tgt_poll_group_000", 00:18:18.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:18.057 "listen_address": { 00:18:18.057 "trtype": "TCP", 00:18:18.057 "adrfam": "IPv4", 00:18:18.057 "traddr": "10.0.0.2", 00:18:18.057 "trsvcid": "4420" 00:18:18.057 }, 00:18:18.057 "peer_address": { 00:18:18.057 "trtype": "TCP", 00:18:18.057 "adrfam": "IPv4", 00:18:18.057 "traddr": "10.0.0.1", 00:18:18.057 "trsvcid": "33522" 00:18:18.057 }, 00:18:18.057 "auth": { 00:18:18.057 "state": "completed", 00:18:18.057 "digest": "sha256", 00:18:18.057 "dhgroup": "ffdhe6144" 00:18:18.057 } 00:18:18.057 } 00:18:18.057 ]' 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.057 20:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.057 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.358 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:18.359 20:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:18.637 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.637 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:18.637 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.637 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.904 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.176 00:18:19.465 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.465 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.465 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.465 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.465 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.465 20:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.465 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.465 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.465 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.465 { 00:18:19.465 "cntlid": 37, 00:18:19.465 "qid": 0, 00:18:19.465 "state": "enabled", 00:18:19.465 "thread": "nvmf_tgt_poll_group_000", 00:18:19.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:19.465 "listen_address": { 00:18:19.465 "trtype": "TCP", 00:18:19.465 "adrfam": "IPv4", 00:18:19.465 "traddr": "10.0.0.2", 00:18:19.465 "trsvcid": "4420" 00:18:19.465 }, 00:18:19.465 "peer_address": { 00:18:19.465 "trtype": "TCP", 00:18:19.465 "adrfam": "IPv4", 00:18:19.465 "traddr": "10.0.0.1", 00:18:19.465 "trsvcid": "33548" 00:18:19.465 }, 00:18:19.465 "auth": { 00:18:19.465 "state": "completed", 00:18:19.465 "digest": "sha256", 00:18:19.465 "dhgroup": "ffdhe6144" 00:18:19.465 } 00:18:19.465 } 00:18:19.465 ]' 00:18:19.465 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.465 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.465 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.744 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.744 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.744 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.744 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.744 20:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.744 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:19.744 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:20.349 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.349 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:20.349 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.349 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.349 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.349 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.349 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:20.349 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.629 20:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.909 00:18:20.909 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.909 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.909 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.189 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.189 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.189 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.189 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.189 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.189 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.189 { 00:18:21.189 "cntlid": 39, 00:18:21.189 "qid": 0, 00:18:21.189 "state": "enabled", 00:18:21.189 "thread": "nvmf_tgt_poll_group_000", 00:18:21.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:21.189 "listen_address": { 00:18:21.189 "trtype": "TCP", 00:18:21.189 "adrfam": 
"IPv4", 00:18:21.189 "traddr": "10.0.0.2", 00:18:21.189 "trsvcid": "4420" 00:18:21.189 }, 00:18:21.189 "peer_address": { 00:18:21.189 "trtype": "TCP", 00:18:21.189 "adrfam": "IPv4", 00:18:21.189 "traddr": "10.0.0.1", 00:18:21.189 "trsvcid": "47366" 00:18:21.189 }, 00:18:21.189 "auth": { 00:18:21.189 "state": "completed", 00:18:21.189 "digest": "sha256", 00:18:21.189 "dhgroup": "ffdhe6144" 00:18:21.189 } 00:18:21.189 } 00:18:21.189 ]' 00:18:21.189 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.189 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.189 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.189 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.190 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.190 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.190 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.190 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.542 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:21.542 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:21.863 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.863 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:21.863 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.863 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.863 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.863 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.863 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.863 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.863 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:22.156 
20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.156 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.759 00:18:22.759 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.759 20:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.759 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.759 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.759 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.759 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.759 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.759 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.759 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.759 { 00:18:22.759 "cntlid": 41, 00:18:22.759 "qid": 0, 00:18:22.759 "state": "enabled", 00:18:22.759 "thread": "nvmf_tgt_poll_group_000", 00:18:22.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:22.759 "listen_address": { 00:18:22.759 "trtype": "TCP", 00:18:22.759 "adrfam": "IPv4", 00:18:22.759 "traddr": "10.0.0.2", 00:18:22.759 "trsvcid": "4420" 00:18:22.759 }, 00:18:22.759 "peer_address": { 00:18:22.759 "trtype": "TCP", 00:18:22.759 "adrfam": "IPv4", 00:18:22.759 "traddr": "10.0.0.1", 00:18:22.759 "trsvcid": "47398" 00:18:22.759 }, 00:18:22.759 "auth": { 00:18:22.759 "state": "completed", 00:18:22.759 "digest": "sha256", 00:18:22.759 "dhgroup": "ffdhe8192" 00:18:22.759 } 00:18:22.759 } 00:18:22.759 ]' 00:18:22.759 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.759 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:18:22.759 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.028 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.028 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.028 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.028 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.028 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.028 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:23.028 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:23.655 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.655 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:23.655 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.655 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.655 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.655 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.655 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.655 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.935 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.199 00:18:24.199 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.199 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.199 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.459 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.459 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.459 20:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.459 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.459 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.459 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.459 { 00:18:24.459 "cntlid": 43, 00:18:24.459 "qid": 0, 00:18:24.459 "state": "enabled", 00:18:24.459 "thread": "nvmf_tgt_poll_group_000", 00:18:24.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:24.459 "listen_address": { 00:18:24.459 "trtype": "TCP", 00:18:24.459 "adrfam": "IPv4", 00:18:24.459 "traddr": "10.0.0.2", 00:18:24.459 "trsvcid": "4420" 00:18:24.459 }, 00:18:24.459 "peer_address": { 00:18:24.459 "trtype": "TCP", 00:18:24.459 "adrfam": "IPv4", 00:18:24.459 "traddr": "10.0.0.1", 00:18:24.459 "trsvcid": "47420" 00:18:24.459 }, 00:18:24.459 "auth": { 00:18:24.459 "state": "completed", 00:18:24.459 "digest": "sha256", 00:18:24.459 "dhgroup": "ffdhe8192" 00:18:24.459 } 00:18:24.459 } 00:18:24.459 ]' 00:18:24.459 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.459 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.459 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.719 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.719 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.719 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.719 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.719 20:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.719 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:24.719 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:25.288 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.288 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:25.288 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.288 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.288 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.288 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.288 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:25.288 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:25.547 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:25.547 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.547 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:25.547 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:25.547 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:25.547 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.547 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.548 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.548 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.548 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.548 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.548 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.548 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.117 00:18:26.117 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.117 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.117 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.117 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.117 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.117 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.117 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.117 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.117 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.117 { 00:18:26.117 "cntlid": 45, 00:18:26.117 "qid": 0, 00:18:26.117 "state": "enabled", 00:18:26.117 "thread": "nvmf_tgt_poll_group_000", 00:18:26.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:26.117 
"listen_address": { 00:18:26.117 "trtype": "TCP", 00:18:26.117 "adrfam": "IPv4", 00:18:26.117 "traddr": "10.0.0.2", 00:18:26.117 "trsvcid": "4420" 00:18:26.117 }, 00:18:26.117 "peer_address": { 00:18:26.117 "trtype": "TCP", 00:18:26.117 "adrfam": "IPv4", 00:18:26.117 "traddr": "10.0.0.1", 00:18:26.117 "trsvcid": "47448" 00:18:26.117 }, 00:18:26.117 "auth": { 00:18:26.117 "state": "completed", 00:18:26.117 "digest": "sha256", 00:18:26.117 "dhgroup": "ffdhe8192" 00:18:26.117 } 00:18:26.117 } 00:18:26.117 ]' 00:18:26.117 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.377 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.377 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.377 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.377 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.377 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.377 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.377 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.637 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:26.637 20:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.206 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.776 00:18:27.776 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.776 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:27.776 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.776 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.037 { 00:18:28.037 "cntlid": 47, 00:18:28.037 "qid": 0, 00:18:28.037 "state": "enabled", 00:18:28.037 "thread": "nvmf_tgt_poll_group_000", 00:18:28.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:28.037 "listen_address": { 00:18:28.037 "trtype": "TCP", 00:18:28.037 "adrfam": "IPv4", 00:18:28.037 "traddr": "10.0.0.2", 00:18:28.037 "trsvcid": "4420" 00:18:28.037 }, 00:18:28.037 "peer_address": { 00:18:28.037 "trtype": "TCP", 00:18:28.037 "adrfam": "IPv4", 00:18:28.037 "traddr": "10.0.0.1", 00:18:28.037 "trsvcid": "47478" 00:18:28.037 }, 00:18:28.037 "auth": { 00:18:28.037 "state": "completed", 00:18:28.037 "digest": "sha256", 00:18:28.037 "dhgroup": "ffdhe8192" 00:18:28.037 } 00:18:28.037 } 00:18:28.037 ]' 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.037 20:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.037 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.296 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:28.296 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.867 
20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.867 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.127 00:18:29.127 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.127 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.127 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.386 { 00:18:29.386 "cntlid": 49, 00:18:29.386 "qid": 0, 00:18:29.386 "state": "enabled", 00:18:29.386 "thread": "nvmf_tgt_poll_group_000", 00:18:29.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:29.386 "listen_address": { 00:18:29.386 "trtype": "TCP", 00:18:29.386 "adrfam": "IPv4", 00:18:29.386 "traddr": "10.0.0.2", 00:18:29.386 "trsvcid": "4420" 00:18:29.386 }, 00:18:29.386 "peer_address": { 00:18:29.386 "trtype": "TCP", 00:18:29.386 "adrfam": "IPv4", 00:18:29.386 "traddr": "10.0.0.1", 00:18:29.386 "trsvcid": "47512" 00:18:29.386 }, 00:18:29.386 "auth": { 00:18:29.386 "state": "completed", 00:18:29.386 "digest": "sha384", 00:18:29.386 "dhgroup": "null" 00:18:29.386 } 00:18:29.386 } 00:18:29.386 ]' 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:29.386 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.646 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:29.646 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:30.215 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.215 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:30.216 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.216 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.216 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.216 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.216 20:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:30.216 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.476 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.736 00:18:30.736 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.736 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.736 20:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.736 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.736 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.736 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.736 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.736 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.736 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.736 { 00:18:30.736 "cntlid": 51, 00:18:30.736 "qid": 0, 00:18:30.736 "state": "enabled", 00:18:30.736 "thread": "nvmf_tgt_poll_group_000", 00:18:30.736 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:30.736 "listen_address": { 00:18:30.736 "trtype": "TCP", 00:18:30.736 "adrfam": "IPv4", 00:18:30.736 "traddr": "10.0.0.2", 00:18:30.736 "trsvcid": "4420" 00:18:30.736 }, 00:18:30.736 "peer_address": { 00:18:30.736 "trtype": "TCP", 00:18:30.736 "adrfam": "IPv4", 00:18:30.736 "traddr": "10.0.0.1", 00:18:30.736 "trsvcid": "54198" 00:18:30.736 }, 00:18:30.736 "auth": { 00:18:30.736 "state": "completed", 00:18:30.736 "digest": "sha384", 00:18:30.736 "dhgroup": "null" 00:18:30.736 } 00:18:30.736 } 00:18:30.736 ]' 00:18:30.736 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.996 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.996 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.996 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:30.996 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.996 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.996 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.996 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.255 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:31.255 20:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:31.824 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.824 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.083 00:18:32.083 20:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.083 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.083 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.342 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.342 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.342 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.342 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.342 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.342 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.342 { 00:18:32.342 "cntlid": 53, 00:18:32.342 "qid": 0, 00:18:32.342 "state": "enabled", 00:18:32.342 "thread": "nvmf_tgt_poll_group_000", 00:18:32.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:32.342 "listen_address": { 00:18:32.342 "trtype": "TCP", 00:18:32.342 "adrfam": "IPv4", 00:18:32.342 "traddr": "10.0.0.2", 00:18:32.342 "trsvcid": "4420" 00:18:32.342 }, 00:18:32.342 "peer_address": { 00:18:32.342 "trtype": "TCP", 00:18:32.342 "adrfam": "IPv4", 00:18:32.342 "traddr": "10.0.0.1", 00:18:32.342 "trsvcid": "54212" 00:18:32.342 }, 00:18:32.342 "auth": { 00:18:32.342 "state": "completed", 00:18:32.342 "digest": "sha384", 00:18:32.342 "dhgroup": "null" 00:18:32.342 } 00:18:32.342 } 00:18:32.342 ]' 00:18:32.342 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:18:32.342 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.342 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.342 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:32.342 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.601 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.601 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.601 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.601 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:32.601 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:33.170 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.170 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:33.170 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.170 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.170 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.170 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.170 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.170 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:33.429 
20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.429 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.688 00:18:33.688 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.688 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.688 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.688 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.688 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.688 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.688 20:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.688 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.688 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.688 { 00:18:33.688 "cntlid": 55, 00:18:33.688 "qid": 0, 00:18:33.688 "state": "enabled", 00:18:33.688 "thread": "nvmf_tgt_poll_group_000", 00:18:33.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:33.688 "listen_address": { 00:18:33.688 "trtype": "TCP", 00:18:33.688 "adrfam": "IPv4", 00:18:33.688 "traddr": "10.0.0.2", 00:18:33.688 "trsvcid": "4420" 00:18:33.688 }, 00:18:33.688 "peer_address": { 00:18:33.688 "trtype": "TCP", 00:18:33.688 "adrfam": "IPv4", 00:18:33.688 "traddr": "10.0.0.1", 00:18:33.688 "trsvcid": "54250" 00:18:33.688 }, 00:18:33.688 "auth": { 00:18:33.688 "state": "completed", 00:18:33.688 "digest": "sha384", 00:18:33.688 "dhgroup": "null" 00:18:33.688 } 00:18:33.688 } 00:18:33.688 ]' 00:18:33.688 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.948 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.948 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.948 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:33.948 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.948 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.948 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.948 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.207 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:34.207 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:34.777 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.777 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:34.777 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.777 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.777 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.777 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.777 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.777 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.777 20:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.777 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.037 00:18:35.037 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.037 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.037 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.297 { 00:18:35.297 "cntlid": 57, 00:18:35.297 "qid": 0, 00:18:35.297 "state": "enabled", 00:18:35.297 "thread": "nvmf_tgt_poll_group_000", 00:18:35.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:35.297 "listen_address": { 00:18:35.297 "trtype": "TCP", 00:18:35.297 "adrfam": "IPv4", 00:18:35.297 "traddr": "10.0.0.2", 00:18:35.297 
"trsvcid": "4420" 00:18:35.297 }, 00:18:35.297 "peer_address": { 00:18:35.297 "trtype": "TCP", 00:18:35.297 "adrfam": "IPv4", 00:18:35.297 "traddr": "10.0.0.1", 00:18:35.297 "trsvcid": "54274" 00:18:35.297 }, 00:18:35.297 "auth": { 00:18:35.297 "state": "completed", 00:18:35.297 "digest": "sha384", 00:18:35.297 "dhgroup": "ffdhe2048" 00:18:35.297 } 00:18:35.297 } 00:18:35.297 ]' 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.297 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.556 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:35.556 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:36.133 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.133 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:36.133 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.133 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.133 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.133 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.133 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:36.133 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.393 20:38:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.393 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.653 00:18:36.653 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.653 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.653 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.653 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.653 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.653 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.653 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.653 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.653 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.653 { 00:18:36.653 "cntlid": 59, 00:18:36.653 "qid": 0, 00:18:36.653 "state": "enabled", 00:18:36.653 "thread": "nvmf_tgt_poll_group_000", 00:18:36.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:36.653 "listen_address": { 00:18:36.653 "trtype": "TCP", 00:18:36.653 "adrfam": "IPv4", 00:18:36.653 "traddr": "10.0.0.2", 00:18:36.653 "trsvcid": "4420" 00:18:36.653 }, 00:18:36.653 "peer_address": { 00:18:36.653 "trtype": "TCP", 00:18:36.653 "adrfam": "IPv4", 00:18:36.653 "traddr": "10.0.0.1", 00:18:36.653 "trsvcid": "54300" 00:18:36.653 }, 00:18:36.653 "auth": { 00:18:36.653 "state": "completed", 00:18:36.653 "digest": "sha384", 00:18:36.653 "dhgroup": "ffdhe2048" 00:18:36.653 } 00:18:36.653 } 00:18:36.653 ]' 00:18:36.653 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.912 20:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.912 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.912 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.912 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.912 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.912 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.912 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.171 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:37.171 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:37.742 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.742 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:37.742 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.742 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.742 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.742 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.742 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.742 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.742 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.001 00:18:38.001 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.001 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.001 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.269 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.269 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.269 20:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.269 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.269 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.269 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.269 { 00:18:38.269 "cntlid": 61, 00:18:38.269 "qid": 0, 00:18:38.269 "state": "enabled", 00:18:38.269 "thread": "nvmf_tgt_poll_group_000", 00:18:38.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:38.269 "listen_address": { 00:18:38.269 "trtype": "TCP", 00:18:38.269 "adrfam": "IPv4", 00:18:38.269 "traddr": "10.0.0.2", 00:18:38.269 "trsvcid": "4420" 00:18:38.269 }, 00:18:38.269 "peer_address": { 00:18:38.269 "trtype": "TCP", 00:18:38.269 "adrfam": "IPv4", 00:18:38.269 "traddr": "10.0.0.1", 00:18:38.269 "trsvcid": "54330" 00:18:38.269 }, 00:18:38.269 "auth": { 00:18:38.269 "state": "completed", 00:18:38.269 "digest": "sha384", 00:18:38.269 "dhgroup": "ffdhe2048" 00:18:38.269 } 00:18:38.269 } 00:18:38.269 ]' 00:18:38.269 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.269 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.269 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.269 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.269 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.529 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.529 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.529 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.529 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:38.529 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:39.098 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.098 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:39.098 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.098 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.098 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.098 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.098 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.098 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.358 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.618 00:18:39.618 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.618 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.618 20:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.879 { 00:18:39.879 "cntlid": 63, 00:18:39.879 "qid": 0, 00:18:39.879 "state": "enabled", 00:18:39.879 "thread": "nvmf_tgt_poll_group_000", 00:18:39.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:39.879 "listen_address": { 00:18:39.879 "trtype": "TCP", 00:18:39.879 "adrfam": 
"IPv4", 00:18:39.879 "traddr": "10.0.0.2", 00:18:39.879 "trsvcid": "4420" 00:18:39.879 }, 00:18:39.879 "peer_address": { 00:18:39.879 "trtype": "TCP", 00:18:39.879 "adrfam": "IPv4", 00:18:39.879 "traddr": "10.0.0.1", 00:18:39.879 "trsvcid": "54370" 00:18:39.879 }, 00:18:39.879 "auth": { 00:18:39.879 "state": "completed", 00:18:39.879 "digest": "sha384", 00:18:39.879 "dhgroup": "ffdhe2048" 00:18:39.879 } 00:18:39.879 } 00:18:39.879 ]' 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.879 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.139 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:40.139 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:40.708 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.709 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:40.709 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.709 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.709 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.709 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.709 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.709 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.709 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:40.709 
20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.709 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.969 00:18:40.969 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.969 20:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.969 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.229 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.229 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.229 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.229 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.229 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.229 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.229 { 00:18:41.229 "cntlid": 65, 00:18:41.229 "qid": 0, 00:18:41.229 "state": "enabled", 00:18:41.229 "thread": "nvmf_tgt_poll_group_000", 00:18:41.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:41.229 "listen_address": { 00:18:41.229 "trtype": "TCP", 00:18:41.229 "adrfam": "IPv4", 00:18:41.229 "traddr": "10.0.0.2", 00:18:41.229 "trsvcid": "4420" 00:18:41.229 }, 00:18:41.229 "peer_address": { 00:18:41.229 "trtype": "TCP", 00:18:41.229 "adrfam": "IPv4", 00:18:41.229 "traddr": "10.0.0.1", 00:18:41.229 "trsvcid": "50624" 00:18:41.229 }, 00:18:41.229 "auth": { 00:18:41.229 "state": "completed", 00:18:41.229 "digest": "sha384", 00:18:41.229 "dhgroup": "ffdhe3072" 00:18:41.229 } 00:18:41.229 } 00:18:41.229 ]' 00:18:41.229 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.229 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:41.229 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.229 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.229 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.489 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.489 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.489 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.489 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:41.489 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:42.058 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.058 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:42.058 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.058 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.058 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.058 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.058 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:42.058 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.317 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.576 00:18:42.576 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.576 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.576 20:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.576 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.576 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.576 20:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.576 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.836 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.836 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.836 { 00:18:42.836 "cntlid": 67, 00:18:42.836 "qid": 0, 00:18:42.836 "state": "enabled", 00:18:42.836 "thread": "nvmf_tgt_poll_group_000", 00:18:42.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:42.836 "listen_address": { 00:18:42.836 "trtype": "TCP", 00:18:42.836 "adrfam": "IPv4", 00:18:42.836 "traddr": "10.0.0.2", 00:18:42.836 "trsvcid": "4420" 00:18:42.836 }, 00:18:42.836 "peer_address": { 00:18:42.836 "trtype": "TCP", 00:18:42.836 "adrfam": "IPv4", 00:18:42.836 "traddr": "10.0.0.1", 00:18:42.836 "trsvcid": "50634" 00:18:42.836 }, 00:18:42.836 "auth": { 00:18:42.836 "state": "completed", 00:18:42.836 "digest": "sha384", 00:18:42.836 "dhgroup": "ffdhe3072" 00:18:42.836 } 00:18:42.836 } 00:18:42.836 ]' 00:18:42.836 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.836 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.836 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.836 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:42.836 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.836 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.836 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.836 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.095 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:43.095 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:43.664 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.664 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:43.664 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.664 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.664 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.664 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.664 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.664 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.664 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.923 00:18:43.923 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.923 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.923 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.182 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.182 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.182 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.183 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.183 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.183 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.183 { 00:18:44.183 "cntlid": 69, 00:18:44.183 "qid": 0, 00:18:44.183 "state": "enabled", 00:18:44.183 "thread": "nvmf_tgt_poll_group_000", 00:18:44.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:44.183 
"listen_address": { 00:18:44.183 "trtype": "TCP", 00:18:44.183 "adrfam": "IPv4", 00:18:44.183 "traddr": "10.0.0.2", 00:18:44.183 "trsvcid": "4420" 00:18:44.183 }, 00:18:44.183 "peer_address": { 00:18:44.183 "trtype": "TCP", 00:18:44.183 "adrfam": "IPv4", 00:18:44.183 "traddr": "10.0.0.1", 00:18:44.183 "trsvcid": "50668" 00:18:44.183 }, 00:18:44.183 "auth": { 00:18:44.183 "state": "completed", 00:18:44.183 "digest": "sha384", 00:18:44.183 "dhgroup": "ffdhe3072" 00:18:44.183 } 00:18:44.183 } 00:18:44.183 ]' 00:18:44.183 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.183 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.183 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.183 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.183 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.442 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.442 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.443 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.443 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:44.443 20:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:45.012 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.012 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:45.012 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.012 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.012 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.012 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.012 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:45.012 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.307 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.566 00:18:45.566 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.566 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:45.566 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.566 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.566 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.566 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.566 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.826 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.826 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.826 { 00:18:45.826 "cntlid": 71, 00:18:45.826 "qid": 0, 00:18:45.826 "state": "enabled", 00:18:45.826 "thread": "nvmf_tgt_poll_group_000", 00:18:45.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:45.826 "listen_address": { 00:18:45.826 "trtype": "TCP", 00:18:45.826 "adrfam": "IPv4", 00:18:45.826 "traddr": "10.0.0.2", 00:18:45.826 "trsvcid": "4420" 00:18:45.826 }, 00:18:45.826 "peer_address": { 00:18:45.826 "trtype": "TCP", 00:18:45.826 "adrfam": "IPv4", 00:18:45.826 "traddr": "10.0.0.1", 00:18:45.826 "trsvcid": "50696" 00:18:45.826 }, 00:18:45.826 "auth": { 00:18:45.826 "state": "completed", 00:18:45.826 "digest": "sha384", 00:18:45.826 "dhgroup": "ffdhe3072" 00:18:45.826 } 00:18:45.826 } 00:18:45.826 ]' 00:18:45.826 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.826 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.826 20:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.826 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.826 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.826 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.826 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.826 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.084 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:46.084 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:46.652 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.652 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:46.652 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:46.652 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.652 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.652 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.652 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.652 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:46.652 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.652 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.911 00:18:47.169 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.169 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.169 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.169 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.169 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.169 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.169 20:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.169 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.169 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.169 { 00:18:47.169 "cntlid": 73, 00:18:47.169 "qid": 0, 00:18:47.169 "state": "enabled", 00:18:47.169 "thread": "nvmf_tgt_poll_group_000", 00:18:47.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:47.169 "listen_address": { 00:18:47.169 "trtype": "TCP", 00:18:47.169 "adrfam": "IPv4", 00:18:47.169 "traddr": "10.0.0.2", 00:18:47.169 "trsvcid": "4420" 00:18:47.169 }, 00:18:47.169 "peer_address": { 00:18:47.169 "trtype": "TCP", 00:18:47.169 "adrfam": "IPv4", 00:18:47.169 "traddr": "10.0.0.1", 00:18:47.169 "trsvcid": "50724" 00:18:47.169 }, 00:18:47.169 "auth": { 00:18:47.169 "state": "completed", 00:18:47.169 "digest": "sha384", 00:18:47.169 "dhgroup": "ffdhe4096" 00:18:47.169 } 00:18:47.169 } 00:18:47.169 ]' 00:18:47.169 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.169 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.169 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.428 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.428 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.428 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.428 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.428 20:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.688 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:47.688 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:47.948 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.209 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.469 00:18:48.469 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.469 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.469 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.729 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.729 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.729 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.729 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.729 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.729 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.729 { 00:18:48.729 "cntlid": 75, 00:18:48.729 "qid": 0, 00:18:48.729 "state": "enabled", 00:18:48.729 "thread": "nvmf_tgt_poll_group_000", 00:18:48.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:48.729 
"listen_address": { 00:18:48.729 "trtype": "TCP", 00:18:48.729 "adrfam": "IPv4", 00:18:48.729 "traddr": "10.0.0.2", 00:18:48.729 "trsvcid": "4420" 00:18:48.729 }, 00:18:48.729 "peer_address": { 00:18:48.729 "trtype": "TCP", 00:18:48.729 "adrfam": "IPv4", 00:18:48.729 "traddr": "10.0.0.1", 00:18:48.729 "trsvcid": "50762" 00:18:48.729 }, 00:18:48.729 "auth": { 00:18:48.729 "state": "completed", 00:18:48.729 "digest": "sha384", 00:18:48.729 "dhgroup": "ffdhe4096" 00:18:48.729 } 00:18:48.729 } 00:18:48.729 ]' 00:18:48.729 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.729 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.729 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.729 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.729 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.989 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.989 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.989 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.989 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:48.989 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:49.559 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.559 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:49.559 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.559 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.559 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.559 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.559 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.559 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.819 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.079 00:18:50.079 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:50.079 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.079 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.338 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.338 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.338 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.338 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.338 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.339 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.339 { 00:18:50.339 "cntlid": 77, 00:18:50.339 "qid": 0, 00:18:50.339 "state": "enabled", 00:18:50.339 "thread": "nvmf_tgt_poll_group_000", 00:18:50.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:50.339 "listen_address": { 00:18:50.339 "trtype": "TCP", 00:18:50.339 "adrfam": "IPv4", 00:18:50.339 "traddr": "10.0.0.2", 00:18:50.339 "trsvcid": "4420" 00:18:50.339 }, 00:18:50.339 "peer_address": { 00:18:50.339 "trtype": "TCP", 00:18:50.339 "adrfam": "IPv4", 00:18:50.339 "traddr": "10.0.0.1", 00:18:50.339 "trsvcid": "49290" 00:18:50.339 }, 00:18:50.339 "auth": { 00:18:50.339 "state": "completed", 00:18:50.339 "digest": "sha384", 00:18:50.339 "dhgroup": "ffdhe4096" 00:18:50.339 } 00:18:50.339 } 00:18:50.339 ]' 00:18:50.339 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.339 20:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.339 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.339 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:50.339 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.339 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.339 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.339 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.598 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:50.598 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:51.258 20:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.258 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.538 00:18:51.538 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.538 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.538 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.878 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.878 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.878 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.878 20:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.878 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.878 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.878 { 00:18:51.879 "cntlid": 79, 00:18:51.879 "qid": 0, 00:18:51.879 "state": "enabled", 00:18:51.879 "thread": "nvmf_tgt_poll_group_000", 00:18:51.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:51.879 "listen_address": { 00:18:51.879 "trtype": "TCP", 00:18:51.879 "adrfam": "IPv4", 00:18:51.879 "traddr": "10.0.0.2", 00:18:51.879 "trsvcid": "4420" 00:18:51.879 }, 00:18:51.879 "peer_address": { 00:18:51.879 "trtype": "TCP", 00:18:51.879 "adrfam": "IPv4", 00:18:51.879 "traddr": "10.0.0.1", 00:18:51.879 "trsvcid": "49314" 00:18:51.879 }, 00:18:51.879 "auth": { 00:18:51.879 "state": "completed", 00:18:51.879 "digest": "sha384", 00:18:51.879 "dhgroup": "ffdhe4096" 00:18:51.879 } 00:18:51.879 } 00:18:51.879 ]' 00:18:51.879 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.879 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.879 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.879 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.879 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.879 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.879 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.879 20:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.144 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:52.145 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:18:52.713 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.713 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:52.713 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.713 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.713 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.713 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.713 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.714 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:18:52.714 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.714 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.283 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.283 { 00:18:53.283 "cntlid": 81, 00:18:53.283 "qid": 0, 00:18:53.283 "state": "enabled", 00:18:53.283 "thread": "nvmf_tgt_poll_group_000", 00:18:53.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:53.283 "listen_address": { 
00:18:53.283 "trtype": "TCP", 00:18:53.283 "adrfam": "IPv4", 00:18:53.283 "traddr": "10.0.0.2", 00:18:53.283 "trsvcid": "4420" 00:18:53.283 }, 00:18:53.283 "peer_address": { 00:18:53.283 "trtype": "TCP", 00:18:53.283 "adrfam": "IPv4", 00:18:53.283 "traddr": "10.0.0.1", 00:18:53.283 "trsvcid": "49334" 00:18:53.283 }, 00:18:53.283 "auth": { 00:18:53.283 "state": "completed", 00:18:53.283 "digest": "sha384", 00:18:53.283 "dhgroup": "ffdhe6144" 00:18:53.283 } 00:18:53.283 } 00:18:53.283 ]' 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.283 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.542 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.542 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.542 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.542 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.542 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.542 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:53.542 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:18:54.109 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.109 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:54.109 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.109 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.109 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.109 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.109 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.109 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.368 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.627 00:18:54.627 20:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.627 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.627 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.887 { 00:18:54.887 "cntlid": 83, 00:18:54.887 "qid": 0, 00:18:54.887 "state": "enabled", 00:18:54.887 "thread": "nvmf_tgt_poll_group_000", 00:18:54.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:54.887 "listen_address": { 00:18:54.887 "trtype": "TCP", 00:18:54.887 "adrfam": "IPv4", 00:18:54.887 "traddr": "10.0.0.2", 00:18:54.887 "trsvcid": "4420" 00:18:54.887 }, 00:18:54.887 "peer_address": { 00:18:54.887 "trtype": "TCP", 00:18:54.887 "adrfam": "IPv4", 00:18:54.887 "traddr": "10.0.0.1", 00:18:54.887 "trsvcid": "49348" 00:18:54.887 }, 00:18:54.887 "auth": { 00:18:54.887 "state": "completed", 00:18:54.887 "digest": "sha384", 00:18:54.887 "dhgroup": "ffdhe6144" 00:18:54.887 } 00:18:54.887 } 00:18:54.887 ]' 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.887 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.146 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:55.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:18:55.715 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.715 20:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:55.715 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.715 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.715 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.715 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.715 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.715 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.975 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.235 00:18:56.235 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.235 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.235 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.494 { 00:18:56.494 "cntlid": 85, 00:18:56.494 "qid": 0, 00:18:56.494 "state": "enabled", 00:18:56.494 "thread": "nvmf_tgt_poll_group_000", 00:18:56.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:56.494 "listen_address": { 00:18:56.494 "trtype": "TCP", 00:18:56.494 "adrfam": "IPv4", 00:18:56.494 "traddr": "10.0.0.2", 00:18:56.494 "trsvcid": "4420" 00:18:56.494 }, 00:18:56.494 "peer_address": { 00:18:56.494 "trtype": "TCP", 00:18:56.494 "adrfam": "IPv4", 00:18:56.494 "traddr": "10.0.0.1", 00:18:56.494 "trsvcid": "49368" 00:18:56.494 }, 00:18:56.494 "auth": { 00:18:56.494 "state": "completed", 00:18:56.494 "digest": "sha384", 00:18:56.494 "dhgroup": "ffdhe6144" 00:18:56.494 } 00:18:56.494 } 00:18:56.494 ]' 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.494 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.754 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:56.754 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:18:57.323 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.323 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:57.323 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.323 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.323 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.323 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:57.323 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:57.323 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:57.583 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:57.843
00:18:57.843 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:57.843 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:57.843 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:58.103 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:58.103 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:58.103 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.103 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.103 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.103 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:58.103 {
00:18:58.103 "cntlid": 87,
00:18:58.103 "qid": 0,
00:18:58.103 "state": "enabled",
00:18:58.103 "thread": "nvmf_tgt_poll_group_000",
00:18:58.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562",
00:18:58.103 "listen_address": {
00:18:58.103 "trtype": "TCP",
00:18:58.103 "adrfam": "IPv4",
00:18:58.103 "traddr": "10.0.0.2",
00:18:58.103 "trsvcid": "4420"
00:18:58.103 },
00:18:58.103 "peer_address": {
00:18:58.103 "trtype": "TCP",
00:18:58.103 "adrfam": "IPv4",
00:18:58.103 "traddr": "10.0.0.1",
00:18:58.103 "trsvcid": "49396"
00:18:58.104 },
00:18:58.104 "auth": {
00:18:58.104 "state": "completed",
00:18:58.104 "digest": "sha384",
00:18:58.104 "dhgroup": "ffdhe6144"
00:18:58.104 }
00:18:58.104 }
00:18:58.104 ]'
00:18:58.104 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:58.104 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:58.104 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:58.104 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:58.104 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:58.104 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:58.104 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:58.104 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:58.363 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=:
00:18:58.364 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=:
00:18:58.939 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:58.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:58.939 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:58.939 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.939 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.939 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.939 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:58.939 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:58.939 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:58.940 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:59.510
00:18:59.510 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:59.510 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:59.510 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:59.770 {
00:18:59.770 "cntlid": 89,
00:18:59.770 "qid": 0,
00:18:59.770 "state": "enabled",
00:18:59.770 "thread": "nvmf_tgt_poll_group_000",
00:18:59.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562",
00:18:59.770 "listen_address": {
00:18:59.770 "trtype": "TCP",
00:18:59.770 "adrfam": "IPv4",
00:18:59.770 "traddr": "10.0.0.2",
00:18:59.770 "trsvcid": "4420"
00:18:59.770 },
00:18:59.770 "peer_address": {
00:18:59.770 "trtype": "TCP",
00:18:59.770 "adrfam": "IPv4",
00:18:59.770 "traddr": "10.0.0.1",
00:18:59.770 "trsvcid": "49408"
00:18:59.770 },
00:18:59.770 "auth": {
00:18:59.770 "state": "completed",
00:18:59.770 "digest": "sha384",
00:18:59.770 "dhgroup": "ffdhe8192"
00:18:59.770 }
00:18:59.770 }
00:18:59.770 ]'
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:59.770 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:00.029 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=:
00:19:00.029 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=:
00:19:00.598 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:00.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:00.599 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:00.599 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.599 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:00.599 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.599 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:00.599 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:00.599 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:00.859 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:01.118
00:19:01.118 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:01.118 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:01.119 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:01.378 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:01.378 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:01.378 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:01.378 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.378 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:01.378 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:01.378 {
00:19:01.378 "cntlid": 91,
00:19:01.378 "qid": 0,
00:19:01.378 "state": "enabled",
00:19:01.378 "thread": "nvmf_tgt_poll_group_000",
00:19:01.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562",
00:19:01.378 "listen_address": {
00:19:01.378 "trtype": "TCP",
00:19:01.378 "adrfam": "IPv4",
00:19:01.378 "traddr": "10.0.0.2",
00:19:01.378 "trsvcid": "4420"
00:19:01.378 },
00:19:01.378 "peer_address": {
00:19:01.378 "trtype": "TCP",
00:19:01.378 "adrfam": "IPv4",
00:19:01.378 "traddr": "10.0.0.1",
00:19:01.378 "trsvcid": "46042"
00:19:01.378 },
00:19:01.378 "auth": {
00:19:01.378 "state": "completed",
00:19:01.378 "digest": "sha384",
00:19:01.378 "dhgroup": "ffdhe8192"
00:19:01.378 }
00:19:01.378 }
00:19:01.378 ]'
00:19:01.378 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:01.378 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:01.378 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:01.637 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:01.637 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:01.637 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:01.637 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:01.637 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:01.637 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==:
00:19:01.637 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==:
00:19:02.211 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:02.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:02.211 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:02.211 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.211 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:02.211 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:02.211 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:02.211 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:02.211 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:02.470 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:03.040
00:19:03.040 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:03.040 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:03.040 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:03.040 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:03.040 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:03.040 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:03.040 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:03.040 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:03.040 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:03.040 {
00:19:03.040 "cntlid": 93,
00:19:03.040 "qid": 0,
00:19:03.040 "state": "enabled",
00:19:03.040 "thread": "nvmf_tgt_poll_group_000",
00:19:03.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562",
00:19:03.040 "listen_address": {
00:19:03.040 "trtype": "TCP",
00:19:03.040 "adrfam": "IPv4",
00:19:03.040 "traddr": "10.0.0.2",
00:19:03.040 "trsvcid": "4420"
00:19:03.040 },
00:19:03.040 "peer_address": {
00:19:03.040 "trtype": "TCP",
00:19:03.040 "adrfam": "IPv4",
00:19:03.040 "traddr": "10.0.0.1",
00:19:03.040 "trsvcid": "46070"
00:19:03.040 },
00:19:03.040 "auth": {
00:19:03.040 "state": "completed",
00:19:03.040 "digest": "sha384",
00:19:03.040 "dhgroup": "ffdhe8192"
00:19:03.040 }
00:19:03.040 }
00:19:03.040 ]'
00:19:03.040 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:03.300 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:03.300 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:03.300 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:03.300 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:03.300 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:03.300 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:03.300 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:03.560 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S:
00:19:03.560 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S:
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:04.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:04.129 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:04.697
00:19:04.697 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:04.697 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:04.697 20:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:04.957 {
00:19:04.957 "cntlid": 95,
00:19:04.957 "qid": 0,
00:19:04.957 "state": "enabled",
00:19:04.957 "thread": "nvmf_tgt_poll_group_000",
00:19:04.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562",
00:19:04.957 "listen_address": {
00:19:04.957 "trtype": "TCP",
00:19:04.957 "adrfam": "IPv4",
00:19:04.957 "traddr": "10.0.0.2",
00:19:04.957 "trsvcid": "4420"
00:19:04.957 },
00:19:04.957 "peer_address": {
00:19:04.957 "trtype": "TCP",
00:19:04.957 "adrfam": "IPv4",
00:19:04.957 "traddr": "10.0.0.1",
00:19:04.957 "trsvcid": "46090"
00:19:04.957 },
00:19:04.957 "auth": {
00:19:04.957 "state": "completed",
00:19:04.957 "digest": "sha384",
00:19:04.957 "dhgroup": "ffdhe8192"
00:19:04.957 }
00:19:04.957 }
00:19:04.957 ]'
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:04.957 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:05.216 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=:
00:19:05.216 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=:
00:19:05.784 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:05.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:05.785 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:06.044
00:19:06.044 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:06.044 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:06.044 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:06.303 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:06.303 20:38:59
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.303 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.303 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.303 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.303 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.303 { 00:19:06.303 "cntlid": 97, 00:19:06.303 "qid": 0, 00:19:06.303 "state": "enabled", 00:19:06.303 "thread": "nvmf_tgt_poll_group_000", 00:19:06.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:06.303 "listen_address": { 00:19:06.303 "trtype": "TCP", 00:19:06.303 "adrfam": "IPv4", 00:19:06.303 "traddr": "10.0.0.2", 00:19:06.303 "trsvcid": "4420" 00:19:06.303 }, 00:19:06.303 "peer_address": { 00:19:06.303 "trtype": "TCP", 00:19:06.303 "adrfam": "IPv4", 00:19:06.303 "traddr": "10.0.0.1", 00:19:06.303 "trsvcid": "46116" 00:19:06.303 }, 00:19:06.303 "auth": { 00:19:06.303 "state": "completed", 00:19:06.303 "digest": "sha512", 00:19:06.303 "dhgroup": "null" 00:19:06.303 } 00:19:06.303 } 00:19:06.303 ]' 00:19:06.303 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.303 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.303 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.303 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:06.303 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.562 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.562 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.562 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.562 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:06.562 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:07.130 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.130 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:07.130 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.130 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.130 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.130 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.130 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.130 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.389 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.648 00:19:07.648 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.648 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.648 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.906 { 00:19:07.906 "cntlid": 99, 
00:19:07.906 "qid": 0, 00:19:07.906 "state": "enabled", 00:19:07.906 "thread": "nvmf_tgt_poll_group_000", 00:19:07.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:07.906 "listen_address": { 00:19:07.906 "trtype": "TCP", 00:19:07.906 "adrfam": "IPv4", 00:19:07.906 "traddr": "10.0.0.2", 00:19:07.906 "trsvcid": "4420" 00:19:07.906 }, 00:19:07.906 "peer_address": { 00:19:07.906 "trtype": "TCP", 00:19:07.906 "adrfam": "IPv4", 00:19:07.906 "traddr": "10.0.0.1", 00:19:07.906 "trsvcid": "46142" 00:19:07.906 }, 00:19:07.906 "auth": { 00:19:07.906 "state": "completed", 00:19:07.906 "digest": "sha512", 00:19:07.906 "dhgroup": "null" 00:19:07.906 } 00:19:07.906 } 00:19:07.906 ]' 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.906 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.165 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret 
DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:08.165 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:08.733 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.733 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:08.733 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.733 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.733 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.733 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.733 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.733 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.992 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.992 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.272 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.272 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.272 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.272 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.272 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.272 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.272 { 00:19:09.272 "cntlid": 101, 00:19:09.272 "qid": 0, 00:19:09.272 "state": "enabled", 00:19:09.272 "thread": "nvmf_tgt_poll_group_000", 00:19:09.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:09.272 "listen_address": { 00:19:09.272 "trtype": "TCP", 00:19:09.272 "adrfam": "IPv4", 00:19:09.272 "traddr": "10.0.0.2", 00:19:09.272 "trsvcid": "4420" 00:19:09.272 }, 00:19:09.272 "peer_address": { 00:19:09.272 "trtype": "TCP", 00:19:09.272 "adrfam": "IPv4", 00:19:09.272 "traddr": "10.0.0.1", 00:19:09.272 "trsvcid": "46166" 00:19:09.272 }, 00:19:09.272 "auth": { 00:19:09.272 "state": "completed", 00:19:09.272 "digest": "sha512", 00:19:09.272 "dhgroup": "null" 00:19:09.272 } 00:19:09.272 } 
00:19:09.272 ]' 00:19:09.272 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.272 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.272 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.272 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:09.272 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.532 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.532 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.532 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.532 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:09.532 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:10.100 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.100 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.100 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:10.100 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.100 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.100 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.100 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.100 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.100 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.359 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.619 00:19:10.619 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.619 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.619 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.878 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.878 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:10.878 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.878 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.878 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.879 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.879 { 00:19:10.879 "cntlid": 103, 00:19:10.879 "qid": 0, 00:19:10.879 "state": "enabled", 00:19:10.879 "thread": "nvmf_tgt_poll_group_000", 00:19:10.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:10.879 "listen_address": { 00:19:10.879 "trtype": "TCP", 00:19:10.879 "adrfam": "IPv4", 00:19:10.879 "traddr": "10.0.0.2", 00:19:10.879 "trsvcid": "4420" 00:19:10.879 }, 00:19:10.879 "peer_address": { 00:19:10.879 "trtype": "TCP", 00:19:10.879 "adrfam": "IPv4", 00:19:10.879 "traddr": "10.0.0.1", 00:19:10.879 "trsvcid": "39744" 00:19:10.879 }, 00:19:10.879 "auth": { 00:19:10.879 "state": "completed", 00:19:10.879 "digest": "sha512", 00:19:10.879 "dhgroup": "null" 00:19:10.879 } 00:19:10.879 } 00:19:10.879 ]' 00:19:10.879 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.879 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.879 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.879 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:10.879 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.879 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.879 20:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.879 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.138 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:11.138 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:11.708 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.708 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:11.708 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.708 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.708 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.708 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.708 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.708 20:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.708 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.708 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.966 00:19:11.966 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.966 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.966 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.225 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.225 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.225 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.225 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.225 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.225 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.225 { 00:19:12.225 "cntlid": 105, 00:19:12.225 "qid": 0, 00:19:12.225 "state": "enabled", 00:19:12.225 "thread": "nvmf_tgt_poll_group_000", 00:19:12.225 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:12.225 "listen_address": { 00:19:12.225 "trtype": "TCP", 00:19:12.225 "adrfam": "IPv4", 00:19:12.225 "traddr": "10.0.0.2", 00:19:12.225 "trsvcid": "4420" 00:19:12.225 }, 00:19:12.225 "peer_address": { 00:19:12.225 "trtype": "TCP", 00:19:12.225 "adrfam": "IPv4", 00:19:12.225 "traddr": "10.0.0.1", 00:19:12.225 "trsvcid": "39750" 00:19:12.225 }, 00:19:12.225 "auth": { 00:19:12.225 "state": "completed", 00:19:12.225 "digest": "sha512", 00:19:12.225 "dhgroup": "ffdhe2048" 00:19:12.225 } 00:19:12.225 } 00:19:12.225 ]' 00:19:12.225 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.225 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.225 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.225 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:12.225 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.483 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.483 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.483 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.483 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret 
DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:12.483 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:13.051 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.051 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:13.051 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.051 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.051 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.051 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.051 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.051 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.310 20:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:13.310 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.310 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.310 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:13.310 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:13.310 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.310 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.310 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.310 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.310 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.311 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.311 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.311 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.570 00:19:13.570 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.570 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.570 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.830 { 00:19:13.830 "cntlid": 107, 00:19:13.830 "qid": 0, 00:19:13.830 "state": "enabled", 00:19:13.830 "thread": "nvmf_tgt_poll_group_000", 00:19:13.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:13.830 "listen_address": { 00:19:13.830 "trtype": "TCP", 00:19:13.830 "adrfam": "IPv4", 00:19:13.830 "traddr": "10.0.0.2", 00:19:13.830 "trsvcid": "4420" 00:19:13.830 }, 00:19:13.830 "peer_address": { 00:19:13.830 "trtype": "TCP", 00:19:13.830 "adrfam": "IPv4", 00:19:13.830 "traddr": "10.0.0.1", 00:19:13.830 "trsvcid": "39776" 00:19:13.830 }, 00:19:13.830 "auth": { 00:19:13.830 "state": 
"completed", 00:19:13.830 "digest": "sha512", 00:19:13.830 "dhgroup": "ffdhe2048" 00:19:13.830 } 00:19:13.830 } 00:19:13.830 ]' 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.830 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.089 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:14.089 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:14.659 20:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.659 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:14.659 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.659 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.659 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.659 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.659 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:14.659 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.659 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.918 00:19:14.918 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.918 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.918 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.179 
20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.179 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.179 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.179 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.179 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.179 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.179 { 00:19:15.179 "cntlid": 109, 00:19:15.179 "qid": 0, 00:19:15.179 "state": "enabled", 00:19:15.179 "thread": "nvmf_tgt_poll_group_000", 00:19:15.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:15.179 "listen_address": { 00:19:15.179 "trtype": "TCP", 00:19:15.179 "adrfam": "IPv4", 00:19:15.179 "traddr": "10.0.0.2", 00:19:15.179 "trsvcid": "4420" 00:19:15.179 }, 00:19:15.179 "peer_address": { 00:19:15.179 "trtype": "TCP", 00:19:15.179 "adrfam": "IPv4", 00:19:15.179 "traddr": "10.0.0.1", 00:19:15.179 "trsvcid": "39806" 00:19:15.179 }, 00:19:15.179 "auth": { 00:19:15.179 "state": "completed", 00:19:15.179 "digest": "sha512", 00:19:15.179 "dhgroup": "ffdhe2048" 00:19:15.179 } 00:19:15.179 } 00:19:15.179 ]' 00:19:15.179 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.179 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.179 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.439 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.439 20:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.439 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.439 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.439 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.439 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:15.439 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:16.005 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.006 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:16.006 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.006 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.006 
20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.006 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.006 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.006 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.264 20:39:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.264 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.522 00:19:16.522 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.522 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.522 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.780 { 00:19:16.780 "cntlid": 111, 
00:19:16.780 "qid": 0, 00:19:16.780 "state": "enabled", 00:19:16.780 "thread": "nvmf_tgt_poll_group_000", 00:19:16.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:16.780 "listen_address": { 00:19:16.780 "trtype": "TCP", 00:19:16.780 "adrfam": "IPv4", 00:19:16.780 "traddr": "10.0.0.2", 00:19:16.780 "trsvcid": "4420" 00:19:16.780 }, 00:19:16.780 "peer_address": { 00:19:16.780 "trtype": "TCP", 00:19:16.780 "adrfam": "IPv4", 00:19:16.780 "traddr": "10.0.0.1", 00:19:16.780 "trsvcid": "39828" 00:19:16.780 }, 00:19:16.780 "auth": { 00:19:16.780 "state": "completed", 00:19:16.780 "digest": "sha512", 00:19:16.780 "dhgroup": "ffdhe2048" 00:19:16.780 } 00:19:16.780 } 00:19:16.780 ]' 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.780 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.037 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:17.037 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:17.602 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.602 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:17.602 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.602 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.602 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.602 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.602 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.602 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.602 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.861 20:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.861 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.119 00:19:18.119 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.119 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.119 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.119 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.119 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.119 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.119 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.119 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.119 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.119 { 00:19:18.119 "cntlid": 113, 00:19:18.119 "qid": 0, 00:19:18.119 "state": "enabled", 00:19:18.119 "thread": "nvmf_tgt_poll_group_000", 00:19:18.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:18.119 "listen_address": { 00:19:18.119 "trtype": "TCP", 00:19:18.119 "adrfam": "IPv4", 00:19:18.119 "traddr": "10.0.0.2", 00:19:18.119 "trsvcid": "4420" 00:19:18.119 }, 00:19:18.119 "peer_address": { 00:19:18.119 "trtype": "TCP", 00:19:18.119 "adrfam": "IPv4", 00:19:18.119 "traddr": "10.0.0.1", 00:19:18.119 "trsvcid": "39854" 00:19:18.119 }, 00:19:18.119 "auth": { 00:19:18.119 "state": 
"completed", 00:19:18.119 "digest": "sha512", 00:19:18.119 "dhgroup": "ffdhe3072" 00:19:18.119 } 00:19:18.119 } 00:19:18.119 ]' 00:19:18.119 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.376 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.376 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.376 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.376 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.376 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.376 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.376 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.634 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:18.634 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret 
DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.202 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.461 00:19:19.461 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.461 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.461 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.720 { 00:19:19.720 "cntlid": 115, 00:19:19.720 "qid": 0, 00:19:19.720 "state": "enabled", 00:19:19.720 "thread": "nvmf_tgt_poll_group_000", 00:19:19.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:19.720 "listen_address": { 00:19:19.720 "trtype": "TCP", 00:19:19.720 "adrfam": "IPv4", 00:19:19.720 "traddr": "10.0.0.2", 00:19:19.720 "trsvcid": "4420" 00:19:19.720 }, 00:19:19.720 "peer_address": { 00:19:19.720 "trtype": "TCP", 00:19:19.720 "adrfam": "IPv4", 00:19:19.720 "traddr": "10.0.0.1", 00:19:19.720 "trsvcid": "39878" 00:19:19.720 }, 00:19:19.720 "auth": { 00:19:19.720 "state": "completed", 00:19:19.720 "digest": "sha512", 00:19:19.720 "dhgroup": "ffdhe3072" 00:19:19.720 } 00:19:19.720 } 00:19:19.720 ]' 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.720 20:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.720 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.979 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:19.979 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:20.546 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.546 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:20.546 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:20.546 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.546 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.546 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.546 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.546 20:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.806 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.065 00:19:21.066 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.066 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.066 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.066 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.325 20:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.325 { 00:19:21.325 "cntlid": 117, 00:19:21.325 "qid": 0, 00:19:21.325 "state": "enabled", 00:19:21.325 "thread": "nvmf_tgt_poll_group_000", 00:19:21.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:21.325 "listen_address": { 00:19:21.325 "trtype": "TCP", 00:19:21.325 "adrfam": "IPv4", 00:19:21.325 "traddr": "10.0.0.2", 00:19:21.325 "trsvcid": "4420" 00:19:21.325 }, 00:19:21.325 "peer_address": { 00:19:21.325 "trtype": "TCP", 00:19:21.325 "adrfam": "IPv4", 00:19:21.325 "traddr": "10.0.0.1", 00:19:21.325 "trsvcid": "33434" 00:19:21.325 }, 00:19:21.325 "auth": { 00:19:21.325 "state": "completed", 00:19:21.325 "digest": "sha512", 00:19:21.325 "dhgroup": "ffdhe3072" 00:19:21.325 } 00:19:21.325 } 00:19:21.325 ]' 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.325 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.584 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:21.584 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.150 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.408 00:19:22.408 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.408 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.408 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.665 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.665 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.665 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.665 20:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.665 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.665 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.665 { 00:19:22.665 "cntlid": 119, 00:19:22.665 "qid": 0, 00:19:22.665 "state": "enabled", 00:19:22.665 "thread": "nvmf_tgt_poll_group_000", 00:19:22.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:22.665 "listen_address": { 00:19:22.665 "trtype": "TCP", 00:19:22.665 "adrfam": "IPv4", 00:19:22.665 "traddr": "10.0.0.2", 00:19:22.665 "trsvcid": "4420" 00:19:22.665 }, 00:19:22.665 "peer_address": { 00:19:22.665 "trtype": "TCP", 00:19:22.665 "adrfam": "IPv4", 00:19:22.665 "traddr": "10.0.0.1", 
00:19:22.665 "trsvcid": "33450" 00:19:22.665 }, 00:19:22.665 "auth": { 00:19:22.665 "state": "completed", 00:19:22.665 "digest": "sha512", 00:19:22.665 "dhgroup": "ffdhe3072" 00:19:22.665 } 00:19:22.665 } 00:19:22.665 ]' 00:19:22.665 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.665 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.665 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.665 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.665 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.923 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.923 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.923 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.923 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:22.923 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:23.490 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.490 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:23.490 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.490 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.490 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.490 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.490 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.490 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.490 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.749 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:23.750 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.750 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:23.750 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:23.750 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:23.750 20:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.750 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.750 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.750 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.750 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.750 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.750 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.750 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.009 00:19:24.009 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.009 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.009 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.268 { 00:19:24.268 "cntlid": 121, 00:19:24.268 "qid": 0, 00:19:24.268 "state": "enabled", 00:19:24.268 "thread": "nvmf_tgt_poll_group_000", 00:19:24.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:24.268 "listen_address": { 00:19:24.268 "trtype": "TCP", 00:19:24.268 "adrfam": "IPv4", 00:19:24.268 "traddr": "10.0.0.2", 00:19:24.268 "trsvcid": "4420" 00:19:24.268 }, 00:19:24.268 "peer_address": { 00:19:24.268 "trtype": "TCP", 00:19:24.268 "adrfam": "IPv4", 00:19:24.268 "traddr": "10.0.0.1", 00:19:24.268 "trsvcid": "33474" 00:19:24.268 }, 00:19:24.268 "auth": { 00:19:24.268 "state": "completed", 00:19:24.268 "digest": "sha512", 00:19:24.268 "dhgroup": "ffdhe4096" 00:19:24.268 } 00:19:24.268 } 00:19:24.268 ]' 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.268 20:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.268 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.529 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:24.529 20:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:25.101 20:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.101 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.101 20:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.359 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.359 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.359 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.359 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.617 00:19:25.617 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.617 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.617 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.618 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.618 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.618 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.618 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.618 20:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.618 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.618 { 00:19:25.618 "cntlid": 123, 00:19:25.618 "qid": 0, 00:19:25.618 "state": "enabled", 00:19:25.618 "thread": "nvmf_tgt_poll_group_000", 00:19:25.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:25.618 "listen_address": { 00:19:25.618 "trtype": "TCP", 00:19:25.618 "adrfam": "IPv4", 00:19:25.618 "traddr": "10.0.0.2", 00:19:25.618 "trsvcid": "4420" 00:19:25.618 }, 00:19:25.618 "peer_address": { 00:19:25.618 "trtype": "TCP", 00:19:25.618 "adrfam": "IPv4", 00:19:25.618 "traddr": "10.0.0.1", 00:19:25.618 "trsvcid": "33500" 00:19:25.618 }, 00:19:25.618 "auth": { 00:19:25.618 "state": "completed", 00:19:25.618 "digest": "sha512", 00:19:25.618 "dhgroup": "ffdhe4096" 00:19:25.618 } 00:19:25.618 } 00:19:25.618 ]' 00:19:25.618 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.618 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.618 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.877 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.877 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.877 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.877 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.877 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.877 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:26.135 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:26.392 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.393 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:26.393 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.393 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.651 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.651 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.651 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.651 20:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.651 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.909 00:19:26.909 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.909 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.909 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.167 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.167 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.167 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.167 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.167 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.167 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.167 { 00:19:27.167 "cntlid": 125, 00:19:27.167 "qid": 0, 00:19:27.167 "state": "enabled", 00:19:27.167 "thread": "nvmf_tgt_poll_group_000", 00:19:27.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:27.167 "listen_address": { 00:19:27.167 "trtype": "TCP", 00:19:27.167 "adrfam": "IPv4", 00:19:27.167 "traddr": "10.0.0.2", 00:19:27.167 
"trsvcid": "4420" 00:19:27.167 }, 00:19:27.167 "peer_address": { 00:19:27.167 "trtype": "TCP", 00:19:27.167 "adrfam": "IPv4", 00:19:27.167 "traddr": "10.0.0.1", 00:19:27.167 "trsvcid": "33532" 00:19:27.167 }, 00:19:27.167 "auth": { 00:19:27.167 "state": "completed", 00:19:27.167 "digest": "sha512", 00:19:27.167 "dhgroup": "ffdhe4096" 00:19:27.167 } 00:19:27.167 } 00:19:27.167 ]' 00:19:27.167 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.167 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.167 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.167 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.167 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.428 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.428 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.428 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.428 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:27.428 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:27.994 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.994 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:27.994 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.994 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.994 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.994 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.994 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.995 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.253 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.511 00:19:28.511 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.511 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.511 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.771 { 00:19:28.771 "cntlid": 127, 00:19:28.771 "qid": 0, 00:19:28.771 "state": "enabled", 00:19:28.771 "thread": "nvmf_tgt_poll_group_000", 00:19:28.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:28.771 "listen_address": { 00:19:28.771 "trtype": "TCP", 00:19:28.771 "adrfam": "IPv4", 00:19:28.771 "traddr": "10.0.0.2", 00:19:28.771 "trsvcid": "4420" 00:19:28.771 }, 00:19:28.771 "peer_address": { 00:19:28.771 "trtype": "TCP", 00:19:28.771 "adrfam": "IPv4", 00:19:28.771 "traddr": "10.0.0.1", 00:19:28.771 "trsvcid": "33572" 00:19:28.771 }, 00:19:28.771 "auth": { 00:19:28.771 "state": "completed", 00:19:28.771 "digest": "sha512", 00:19:28.771 "dhgroup": "ffdhe4096" 00:19:28.771 } 00:19:28.771 } 00:19:28.771 ]' 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.771 20:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.771 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.032 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:29.032 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:29.598 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.599 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:29.599 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.599 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:29.599 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.599 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.599 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.599 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.599 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.858 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.117 00:19:30.117 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.117 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.117 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.378 20:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.378 { 00:19:30.378 "cntlid": 129, 00:19:30.378 "qid": 0, 00:19:30.378 "state": "enabled", 00:19:30.378 "thread": "nvmf_tgt_poll_group_000", 00:19:30.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:30.378 "listen_address": { 00:19:30.378 "trtype": "TCP", 00:19:30.378 "adrfam": "IPv4", 00:19:30.378 "traddr": "10.0.0.2", 00:19:30.378 "trsvcid": "4420" 00:19:30.378 }, 00:19:30.378 "peer_address": { 00:19:30.378 "trtype": "TCP", 00:19:30.378 "adrfam": "IPv4", 00:19:30.378 "traddr": "10.0.0.1", 00:19:30.378 "trsvcid": "33598" 00:19:30.378 }, 00:19:30.378 "auth": { 00:19:30.378 "state": "completed", 00:19:30.378 "digest": "sha512", 00:19:30.378 "dhgroup": "ffdhe6144" 00:19:30.378 } 00:19:30.378 } 00:19:30.378 ]' 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.378 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.637 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:30.637 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.209 20:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.209 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.779 00:19:31.779 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.779 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.779 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.779 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.779 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.779 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.779 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.779 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.779 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.779 { 00:19:31.779 "cntlid": 131, 00:19:31.779 "qid": 0, 00:19:31.779 "state": "enabled", 00:19:31.779 "thread": "nvmf_tgt_poll_group_000", 00:19:31.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:31.779 "listen_address": { 00:19:31.779 "trtype": "TCP", 00:19:31.779 "adrfam": "IPv4", 00:19:31.779 "traddr": "10.0.0.2", 00:19:31.779 
"trsvcid": "4420" 00:19:31.779 }, 00:19:31.779 "peer_address": { 00:19:31.779 "trtype": "TCP", 00:19:31.779 "adrfam": "IPv4", 00:19:31.779 "traddr": "10.0.0.1", 00:19:31.779 "trsvcid": "46786" 00:19:31.779 }, 00:19:31.779 "auth": { 00:19:31.779 "state": "completed", 00:19:31.779 "digest": "sha512", 00:19:31.779 "dhgroup": "ffdhe6144" 00:19:31.779 } 00:19:31.779 } 00:19:31.779 ]' 00:19:31.779 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.040 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.040 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.040 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.040 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.040 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.040 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.040 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.298 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:32.298 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.868 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.127 00:19:33.127 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.127 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:33.127 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.385 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.385 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.385 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.385 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.385 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.385 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.385 { 00:19:33.385 "cntlid": 133, 00:19:33.385 "qid": 0, 00:19:33.385 "state": "enabled", 00:19:33.385 "thread": "nvmf_tgt_poll_group_000", 00:19:33.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:33.385 "listen_address": { 00:19:33.385 "trtype": "TCP", 00:19:33.385 "adrfam": "IPv4", 00:19:33.385 "traddr": "10.0.0.2", 00:19:33.385 "trsvcid": "4420" 00:19:33.385 }, 00:19:33.385 "peer_address": { 00:19:33.385 "trtype": "TCP", 00:19:33.385 "adrfam": "IPv4", 00:19:33.385 "traddr": "10.0.0.1", 00:19:33.385 "trsvcid": "46816" 00:19:33.385 }, 00:19:33.385 "auth": { 00:19:33.385 "state": "completed", 00:19:33.385 "digest": "sha512", 00:19:33.385 "dhgroup": "ffdhe6144" 00:19:33.385 } 00:19:33.385 } 00:19:33.385 ]' 00:19:33.385 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.385 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.385 20:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.643 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.643 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.643 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.643 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.643 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.643 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:33.643 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:34.231 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.231 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:34.231 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.231 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.231 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.231 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.231 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.231 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.540 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.799 00:19:34.799 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.799 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.799 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.059 { 00:19:35.059 "cntlid": 135, 00:19:35.059 "qid": 0, 00:19:35.059 "state": "enabled", 00:19:35.059 "thread": "nvmf_tgt_poll_group_000", 00:19:35.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:35.059 "listen_address": { 00:19:35.059 "trtype": "TCP", 00:19:35.059 "adrfam": "IPv4", 00:19:35.059 "traddr": "10.0.0.2", 00:19:35.059 "trsvcid": "4420" 00:19:35.059 }, 00:19:35.059 "peer_address": { 00:19:35.059 "trtype": "TCP", 00:19:35.059 "adrfam": "IPv4", 00:19:35.059 "traddr": "10.0.0.1", 00:19:35.059 "trsvcid": "46846" 00:19:35.059 }, 00:19:35.059 "auth": { 00:19:35.059 "state": "completed", 00:19:35.059 "digest": "sha512", 00:19:35.059 "dhgroup": "ffdhe6144" 00:19:35.059 } 00:19:35.059 } 00:19:35.059 ]' 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.059 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.318 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:35.318 20:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:35.885 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.885 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:35.885 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.885 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.885 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.885 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.885 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.885 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:35.885 20:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:36.143 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:36.143 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.144 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:36.144 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:36.144 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:36.144 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.144 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.144 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.144 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.144 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.144 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.144 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.144 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.403 00:19:36.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.662 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.662 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.662 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.662 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.662 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.662 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.662 { 00:19:36.662 "cntlid": 137, 00:19:36.662 "qid": 0, 00:19:36.662 "state": "enabled", 00:19:36.662 "thread": "nvmf_tgt_poll_group_000", 00:19:36.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:36.662 "listen_address": { 00:19:36.662 "trtype": "TCP", 00:19:36.662 "adrfam": "IPv4", 00:19:36.662 "traddr": "10.0.0.2", 00:19:36.662 
"trsvcid": "4420" 00:19:36.662 }, 00:19:36.662 "peer_address": { 00:19:36.662 "trtype": "TCP", 00:19:36.662 "adrfam": "IPv4", 00:19:36.662 "traddr": "10.0.0.1", 00:19:36.662 "trsvcid": "46864" 00:19:36.662 }, 00:19:36.662 "auth": { 00:19:36.662 "state": "completed", 00:19:36.662 "digest": "sha512", 00:19:36.662 "dhgroup": "ffdhe8192" 00:19:36.662 } 00:19:36.662 } 00:19:36.662 ]' 00:19:36.662 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.662 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.662 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.934 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.934 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.934 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.934 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.934 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.934 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:36.934 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:37.503 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.503 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:37.503 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.503 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.503 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.503 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.503 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.503 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.763 20:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.763 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.334 00:19:38.334 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.334 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.334 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.334 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.334 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.334 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.334 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.334 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.334 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.334 { 00:19:38.334 "cntlid": 139, 00:19:38.334 "qid": 0, 00:19:38.334 "state": "enabled", 00:19:38.334 "thread": "nvmf_tgt_poll_group_000", 00:19:38.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:38.334 "listen_address": { 00:19:38.334 "trtype": "TCP", 00:19:38.334 "adrfam": "IPv4", 00:19:38.334 "traddr": "10.0.0.2", 00:19:38.334 "trsvcid": "4420" 00:19:38.334 }, 00:19:38.334 "peer_address": { 00:19:38.334 "trtype": "TCP", 00:19:38.334 "adrfam": "IPv4", 00:19:38.334 "traddr": "10.0.0.1", 00:19:38.334 "trsvcid": "46904" 00:19:38.334 }, 00:19:38.334 "auth": { 00:19:38.334 "state": "completed", 00:19:38.334 "digest": "sha512", 00:19:38.334 "dhgroup": "ffdhe8192" 00:19:38.334 } 00:19:38.334 } 00:19:38.334 ]' 00:19:38.334 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.595 20:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.595 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.595 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.595 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.595 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.595 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.595 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.595 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:38.595 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: --dhchap-ctrl-secret DHHC-1:02:ZGNkMTg1OTdhNTIwMDBmY2YwNTkzNjY4MmNmOTExYTA4MjBjNmJlZmVkMjdmY2Vk84hIQA==: 00:19:39.207 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.207 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:39.207 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.207 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.207 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.207 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.207 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.207 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.467 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.036 00:19:40.036 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.036 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.036 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.036 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.036 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.036 20:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.036 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.036 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.036 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.036 { 00:19:40.036 "cntlid": 141, 00:19:40.036 "qid": 0, 00:19:40.036 "state": "enabled", 00:19:40.036 "thread": "nvmf_tgt_poll_group_000", 00:19:40.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:40.036 "listen_address": { 00:19:40.036 "trtype": "TCP", 00:19:40.036 "adrfam": "IPv4", 00:19:40.036 "traddr": "10.0.0.2", 00:19:40.036 "trsvcid": "4420" 00:19:40.036 }, 00:19:40.036 "peer_address": { 00:19:40.036 "trtype": "TCP", 00:19:40.036 "adrfam": "IPv4", 00:19:40.036 "traddr": "10.0.0.1", 00:19:40.036 "trsvcid": "46918" 00:19:40.036 }, 00:19:40.036 "auth": { 00:19:40.036 "state": "completed", 00:19:40.036 "digest": "sha512", 00:19:40.036 "dhgroup": "ffdhe8192" 00:19:40.036 } 00:19:40.036 } 00:19:40.036 ]' 00:19:40.036 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.036 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.036 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.295 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.295 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.295 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.295 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.295 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.295 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:40.295 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:01:MjhmZDI3ODdjYzJhNjVmYjIwZTA2MzJiYjlkYzIzNmQ3Zr7S: 00:19:40.865 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.865 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:40.865 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.865 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.865 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.865 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.865 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.865 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.125 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.694 00:19:41.694 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.694 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.694 20:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.694 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.694 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.694 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.694 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.694 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.694 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.694 { 00:19:41.694 "cntlid": 143, 00:19:41.694 "qid": 0, 00:19:41.694 "state": "enabled", 00:19:41.694 "thread": "nvmf_tgt_poll_group_000", 00:19:41.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:41.694 "listen_address": { 00:19:41.694 "trtype": "TCP", 00:19:41.694 "adrfam": 
"IPv4", 00:19:41.694 "traddr": "10.0.0.2", 00:19:41.694 "trsvcid": "4420" 00:19:41.694 }, 00:19:41.694 "peer_address": { 00:19:41.694 "trtype": "TCP", 00:19:41.694 "adrfam": "IPv4", 00:19:41.694 "traddr": "10.0.0.1", 00:19:41.694 "trsvcid": "54124" 00:19:41.694 }, 00:19:41.694 "auth": { 00:19:41.694 "state": "completed", 00:19:41.694 "digest": "sha512", 00:19:41.694 "dhgroup": "ffdhe8192" 00:19:41.694 } 00:19:41.694 } 00:19:41.694 ]' 00:19:41.694 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.694 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.694 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.953 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.953 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.953 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.953 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.953 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.212 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:42.212 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:42.472 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.732 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:42.732 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.732 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.732 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.732 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:42.732 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:42.732 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:42.732 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.732 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.732 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.732 20:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.732 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.301 00:19:43.301 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.301 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.301 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.560 { 00:19:43.560 "cntlid": 145, 00:19:43.560 "qid": 0, 00:19:43.560 "state": "enabled", 00:19:43.560 "thread": "nvmf_tgt_poll_group_000", 00:19:43.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:43.560 "listen_address": { 00:19:43.560 "trtype": "TCP", 00:19:43.560 "adrfam": "IPv4", 00:19:43.560 "traddr": "10.0.0.2", 00:19:43.560 "trsvcid": "4420" 00:19:43.560 }, 00:19:43.560 "peer_address": { 00:19:43.560 "trtype": "TCP", 00:19:43.560 "adrfam": "IPv4", 00:19:43.560 "traddr": "10.0.0.1", 00:19:43.560 "trsvcid": "54158" 00:19:43.560 }, 00:19:43.560 "auth": { 00:19:43.560 "state": 
"completed", 00:19:43.560 "digest": "sha512", 00:19:43.560 "dhgroup": "ffdhe8192" 00:19:43.560 } 00:19:43.560 } 00:19:43.560 ]' 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.560 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.820 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:43.820 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWZhZWI1MmI0Mzc0MzAwMmNjOTMxNTVhMzU4ZDAzNzRkMTYyMTE4NmI0NjkxMjAwjP0y2w==: --dhchap-ctrl-secret 
DHHC-1:03:YzBkNjNiNjU2NDY4NDMyMzkwZjk4Y2JhZmJhOTBhYmE1MzY4YTgzMGQ3NTNkMDA5NWZmZDc0Njg1YTJkMzQ1MWuVNNA=: 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:44.390 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:44.651 request: 00:19:44.651 { 00:19:44.651 "name": "nvme0", 00:19:44.651 "trtype": "tcp", 00:19:44.651 "traddr": "10.0.0.2", 00:19:44.651 "adrfam": "ipv4", 00:19:44.651 "trsvcid": "4420", 00:19:44.651 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:44.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:44.651 "prchk_reftag": false, 00:19:44.651 "prchk_guard": false, 00:19:44.651 "hdgst": false, 00:19:44.651 "ddgst": false, 00:19:44.651 "dhchap_key": "key2", 00:19:44.651 "allow_unrecognized_csi": false, 00:19:44.651 "method": "bdev_nvme_attach_controller", 00:19:44.651 "req_id": 1 00:19:44.651 } 00:19:44.651 Got JSON-RPC error response 00:19:44.651 response: 00:19:44.651 { 00:19:44.651 "code": -5, 00:19:44.651 "message": 
"Input/output error" 00:19:44.651 } 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:44.651 20:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.651 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:45.220 request: 00:19:45.220 { 00:19:45.220 "name": "nvme0", 00:19:45.220 "trtype": "tcp", 00:19:45.220 "traddr": "10.0.0.2", 00:19:45.220 "adrfam": "ipv4", 00:19:45.220 "trsvcid": "4420", 00:19:45.220 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:45.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:45.220 "prchk_reftag": false, 00:19:45.220 "prchk_guard": false, 00:19:45.220 "hdgst": 
false, 00:19:45.220 "ddgst": false, 00:19:45.220 "dhchap_key": "key1", 00:19:45.220 "dhchap_ctrlr_key": "ckey2", 00:19:45.220 "allow_unrecognized_csi": false, 00:19:45.220 "method": "bdev_nvme_attach_controller", 00:19:45.220 "req_id": 1 00:19:45.220 } 00:19:45.220 Got JSON-RPC error response 00:19:45.220 response: 00:19:45.220 { 00:19:45.220 "code": -5, 00:19:45.220 "message": "Input/output error" 00:19:45.220 } 00:19:45.220 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:45.220 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:45.220 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:45.220 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:45.220 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:45.220 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.220 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.220 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.221 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.481 request: 00:19:45.481 { 00:19:45.481 "name": "nvme0", 00:19:45.481 "trtype": 
"tcp", 00:19:45.481 "traddr": "10.0.0.2", 00:19:45.481 "adrfam": "ipv4", 00:19:45.481 "trsvcid": "4420", 00:19:45.481 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:45.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:45.481 "prchk_reftag": false, 00:19:45.481 "prchk_guard": false, 00:19:45.481 "hdgst": false, 00:19:45.481 "ddgst": false, 00:19:45.481 "dhchap_key": "key1", 00:19:45.481 "dhchap_ctrlr_key": "ckey1", 00:19:45.481 "allow_unrecognized_csi": false, 00:19:45.481 "method": "bdev_nvme_attach_controller", 00:19:45.481 "req_id": 1 00:19:45.481 } 00:19:45.481 Got JSON-RPC error response 00:19:45.481 response: 00:19:45.481 { 00:19:45.481 "code": -5, 00:19:45.481 "message": "Input/output error" 00:19:45.481 } 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 343702 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 343702 ']' 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 343702 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343702 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343702' 00:19:45.741 killing process with pid 343702 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 343702 00:19:45.741 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 343702 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=366932 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 366932 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 366932 ']' 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.741 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 366932 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 366932 ']' 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.001 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.261 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.261 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:46.261 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:46.261 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.261 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.261 null0 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6gY 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.pnw ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pnw 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.toY 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.2VJ ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2VJ 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.txi 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.NLa ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NLa 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Caz 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.521 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.090 nvme0n1 00:19:47.090 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.090 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.090 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.349 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.349 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.349 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.349 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.349 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.349 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.349 { 00:19:47.349 "cntlid": 1, 00:19:47.349 "qid": 0, 00:19:47.349 "state": "enabled", 00:19:47.349 "thread": "nvmf_tgt_poll_group_000", 00:19:47.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:47.349 "listen_address": { 00:19:47.349 "trtype": "TCP", 00:19:47.349 "adrfam": "IPv4", 00:19:47.349 "traddr": "10.0.0.2", 00:19:47.349 "trsvcid": "4420" 00:19:47.349 }, 00:19:47.349 "peer_address": { 00:19:47.349 "trtype": "TCP", 00:19:47.349 "adrfam": "IPv4", 00:19:47.349 "traddr": 
"10.0.0.1", 00:19:47.349 "trsvcid": "54232" 00:19:47.349 }, 00:19:47.349 "auth": { 00:19:47.349 "state": "completed", 00:19:47.349 "digest": "sha512", 00:19:47.349 "dhgroup": "ffdhe8192" 00:19:47.349 } 00:19:47.349 } 00:19:47.349 ]' 00:19:47.349 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.349 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.349 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.349 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.349 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.609 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.609 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.609 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.609 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:47.609 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:48.179 20:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.179 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:48.179 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.179 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.179 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.179 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:48.179 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.179 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.179 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.179 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:48.179 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:48.439 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:48.439 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:48.439 20:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:48.439 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:48.439 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.439 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:48.439 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.439 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:48.439 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.439 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.700 request: 00:19:48.700 { 00:19:48.700 "name": "nvme0", 00:19:48.700 "trtype": "tcp", 00:19:48.700 "traddr": "10.0.0.2", 00:19:48.700 "adrfam": "ipv4", 00:19:48.700 "trsvcid": "4420", 00:19:48.700 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:48.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:48.700 "prchk_reftag": false, 00:19:48.700 "prchk_guard": false, 00:19:48.700 "hdgst": false, 00:19:48.700 "ddgst": false, 00:19:48.700 "dhchap_key": "key3", 00:19:48.700 
"allow_unrecognized_csi": false, 00:19:48.700 "method": "bdev_nvme_attach_controller", 00:19:48.700 "req_id": 1 00:19:48.700 } 00:19:48.700 Got JSON-RPC error response 00:19:48.700 response: 00:19:48.700 { 00:19:48.700 "code": -5, 00:19:48.700 "message": "Input/output error" 00:19:48.700 } 00:19:48.700 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:48.700 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:48.700 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:48.700 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:48.700 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:48.700 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:48.700 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:48.700 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:48.700 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:48.700 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:48.700 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:48.700 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:48.700 20:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.700 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:48.700 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.700 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:48.700 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.700 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.960 request: 00:19:48.960 { 00:19:48.960 "name": "nvme0", 00:19:48.960 "trtype": "tcp", 00:19:48.960 "traddr": "10.0.0.2", 00:19:48.960 "adrfam": "ipv4", 00:19:48.960 "trsvcid": "4420", 00:19:48.960 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:48.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:48.960 "prchk_reftag": false, 00:19:48.960 "prchk_guard": false, 00:19:48.960 "hdgst": false, 00:19:48.960 "ddgst": false, 00:19:48.960 "dhchap_key": "key3", 00:19:48.960 "allow_unrecognized_csi": false, 00:19:48.960 "method": "bdev_nvme_attach_controller", 00:19:48.960 "req_id": 1 00:19:48.960 } 00:19:48.960 Got JSON-RPC error response 00:19:48.960 response: 00:19:48.960 { 00:19:48.960 "code": -5, 00:19:48.960 "message": "Input/output error" 00:19:48.960 } 00:19:48.960 
20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:48.960 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:48.960 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:48.960 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:48.960 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:48.960 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:48.960 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:48.960 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:48.960 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:48.960 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:49.220 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:49.480 request: 00:19:49.480 { 00:19:49.480 "name": "nvme0", 00:19:49.480 "trtype": "tcp", 00:19:49.480 "traddr": "10.0.0.2", 00:19:49.480 "adrfam": "ipv4", 00:19:49.480 "trsvcid": "4420", 00:19:49.480 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:49.480 "prchk_reftag": false, 00:19:49.480 "prchk_guard": false, 00:19:49.480 "hdgst": false, 00:19:49.480 "ddgst": false, 00:19:49.480 "dhchap_key": "key0", 00:19:49.480 "dhchap_ctrlr_key": "key1", 00:19:49.480 "allow_unrecognized_csi": false, 00:19:49.480 "method": "bdev_nvme_attach_controller", 00:19:49.480 "req_id": 1 00:19:49.480 } 00:19:49.480 Got JSON-RPC error response 00:19:49.480 response: 00:19:49.480 { 00:19:49.480 "code": -5, 00:19:49.480 "message": "Input/output error" 00:19:49.480 } 00:19:49.480 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:49.480 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:49.480 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:49.480 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:49.480 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:49.480 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:49.480 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:49.740 nvme0n1 00:19:49.740 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:49.740 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:49.740 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.999 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.999 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.999 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.259 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:19:50.259 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.259 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:50.259 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.259 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:50.259 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:50.259 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:50.828 nvme0n1 00:19:50.828 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:50.828 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:50.828 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.088 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.088 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:51.088 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.088 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.088 
20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.088 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:51.088 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:51.088 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.347 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.347 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:51.347 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: --dhchap-ctrl-secret DHHC-1:03:ZTc1NzVlOWY2ZmJlM2Q4NGEwMzhlNGE1ZjM1MGY3YTc1OTFhODE2MjRkYzM0OTdhZmMyMzFiMGExMzZhNjU3ZvS86Mg=: 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:51.915 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:52.482 request: 00:19:52.482 { 00:19:52.482 "name": "nvme0", 00:19:52.482 "trtype": "tcp", 00:19:52.482 "traddr": "10.0.0.2", 00:19:52.482 "adrfam": "ipv4", 00:19:52.482 "trsvcid": "4420", 00:19:52.482 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:52.482 "prchk_reftag": false, 00:19:52.482 "prchk_guard": false, 00:19:52.482 "hdgst": false, 00:19:52.482 "ddgst": false, 00:19:52.482 "dhchap_key": "key1", 00:19:52.482 "allow_unrecognized_csi": false, 00:19:52.482 "method": "bdev_nvme_attach_controller", 00:19:52.482 "req_id": 1 00:19:52.482 } 00:19:52.482 Got JSON-RPC error response 00:19:52.482 response: 00:19:52.482 { 00:19:52.482 "code": -5, 00:19:52.482 "message": "Input/output error" 00:19:52.482 } 00:19:52.482 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:52.482 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.482 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.482 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.482 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:52.482 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:52.482 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:53.051 nvme0n1 00:19:53.051 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:53.051 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:53.051 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.310 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.310 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.310 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.310 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:53.310 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.310 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.310 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.310 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:53.310 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:53.310 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:53.569 nvme0n1 00:19:53.569 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:53.569 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:53.569 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.828 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.828 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.828 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: '' 2s 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: ]] 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGM2YTNjZjgxZWZlNTAwYmM1M2MxMjdhNTVkMWZiN2Lf/f+2: 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:54.087 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:55.992 
20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: 2s 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:55.992 20:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: ]] 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWJmZThkNGRjZWE3MDA5M2FmYjRmNWJjYWM3YWYyN2YxNDE3N2Y4MzRmYTMzMGMyh3ZlfA==: 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:55.992 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:58.530 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:58.790 nvme0n1 00:19:58.790 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:19:58.790 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.790 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.790 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.790 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:58.790 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:59.360 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:59.360 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:59.360 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.360 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.360 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:59.360 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.360 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.360 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.360 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:59.360 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:59.620 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:59.620 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:59.620 20:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:59.880 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:00.140 request: 00:20:00.140 { 00:20:00.140 "name": "nvme0", 00:20:00.140 "dhchap_key": "key1", 00:20:00.140 "dhchap_ctrlr_key": "key3", 00:20:00.140 "method": "bdev_nvme_set_keys", 00:20:00.140 "req_id": 1 00:20:00.140 } 00:20:00.140 Got JSON-RPC error response 00:20:00.140 response: 00:20:00.140 { 00:20:00.140 "code": -13, 00:20:00.140 "message": "Permission denied" 00:20:00.140 } 00:20:00.140 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:00.140 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:00.140 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:00.140 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:00.140 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:00.140 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:00.140 20:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.400 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:00.400 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:01.339 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:01.339 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:01.339 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.596 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:01.596 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:01.596 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.596 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.596 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.596 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:01.597 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:01.597 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:02.533 nvme0n1 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.533 20:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:02.533 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:02.793 request: 00:20:02.793 { 00:20:02.793 "name": "nvme0", 00:20:02.793 "dhchap_key": "key2", 00:20:02.793 "dhchap_ctrlr_key": "key0", 00:20:02.793 "method": "bdev_nvme_set_keys", 00:20:02.793 "req_id": 1 00:20:02.793 } 00:20:02.793 Got JSON-RPC error response 00:20:02.793 response: 00:20:02.793 { 00:20:02.793 "code": -13, 00:20:02.793 "message": "Permission denied" 00:20:02.793 } 00:20:02.793 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:02.793 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:02.793 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:02.793 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:02.793 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:02.793 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:02.793 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.051 20:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:03.051 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:03.988 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:03.988 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:03.988 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.247 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:04.247 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:04.247 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:04.247 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 343980 00:20:04.247 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 343980 ']' 00:20:04.247 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 343980 00:20:04.247 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:04.247 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.248 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343980 00:20:04.248 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:04.248 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:04.248 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 343980' 00:20:04.248 killing process with pid 343980 00:20:04.248 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 343980 00:20:04.248 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 343980 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:04.507 rmmod nvme_tcp 00:20:04.507 rmmod nvme_fabrics 00:20:04.507 rmmod nvme_keyring 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 366932 ']' 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 366932 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 366932 ']' 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 366932 00:20:04.507 20:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 366932 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 366932' 00:20:04.507 killing process with pid 366932 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 366932 00:20:04.507 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 366932 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.767 20:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.6gY /tmp/spdk.key-sha256.toY /tmp/spdk.key-sha384.txi /tmp/spdk.key-sha512.Caz /tmp/spdk.key-sha512.pnw /tmp/spdk.key-sha384.2VJ /tmp/spdk.key-sha256.NLa '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:07.311 00:20:07.311 real 2m26.946s 00:20:07.311 user 5m35.591s 00:20:07.311 sys 0m23.555s 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.311 ************************************ 00:20:07.311 END TEST nvmf_auth_target 00:20:07.311 ************************************ 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.311 20:40:00 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:07.311 ************************************ 00:20:07.311 START TEST nvmf_bdevio_no_huge 00:20:07.311 ************************************ 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:07.311 * Looking for test storage... 00:20:07.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.311 20:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:07.311 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:07.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.312 --rc genhtml_branch_coverage=1 00:20:07.312 --rc genhtml_function_coverage=1 00:20:07.312 --rc genhtml_legend=1 00:20:07.312 --rc geninfo_all_blocks=1 00:20:07.312 --rc geninfo_unexecuted_blocks=1 00:20:07.312 00:20:07.312 ' 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:07.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.312 --rc genhtml_branch_coverage=1 00:20:07.312 --rc genhtml_function_coverage=1 00:20:07.312 --rc genhtml_legend=1 00:20:07.312 --rc geninfo_all_blocks=1 00:20:07.312 --rc geninfo_unexecuted_blocks=1 00:20:07.312 00:20:07.312 ' 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:07.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.312 --rc genhtml_branch_coverage=1 00:20:07.312 --rc genhtml_function_coverage=1 00:20:07.312 --rc genhtml_legend=1 00:20:07.312 --rc geninfo_all_blocks=1 00:20:07.312 --rc geninfo_unexecuted_blocks=1 00:20:07.312 00:20:07.312 ' 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:07.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.312 --rc genhtml_branch_coverage=1 00:20:07.312 --rc 
genhtml_function_coverage=1 00:20:07.312 --rc genhtml_legend=1 00:20:07.312 --rc geninfo_all_blocks=1 00:20:07.312 --rc geninfo_unexecuted_blocks=1 00:20:07.312 00:20:07.312 ' 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:07.312 20:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.312 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.313 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:07.313 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:07.313 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:07.313 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:20:13.889 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:13.889 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:13.889 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:13.890 Found net devices under 0000:af:00.0: cvl_0_0 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.890 
20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:13.890 Found net devices under 0000:af:00.1: cvl_0_1 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:13.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:20:13.890 00:20:13.890 --- 10.0.0.2 ping statistics --- 00:20:13.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.890 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:20:13.890 00:20:13.890 --- 10.0.0.1 ping statistics --- 00:20:13.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.890 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=374158 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 374158 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 374158 ']' 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.890 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.890 [2024-12-05 20:40:06.472350] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:20:13.890 [2024-12-05 20:40:06.472389] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:13.890 [2024-12-05 20:40:06.550394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.890 [2024-12-05 20:40:06.594272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.890 [2024-12-05 20:40:06.594306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.890 [2024-12-05 20:40:06.594313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.890 [2024-12-05 20:40:06.594318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.890 [2024-12-05 20:40:06.594323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:13.890 [2024-12-05 20:40:06.595575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:13.890 [2024-12-05 20:40:06.595689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:13.890 [2024-12-05 20:40:06.595799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.890 [2024-12-05 20:40:06.595801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:13.890 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.890 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:13.890 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:13.890 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:13.891 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.891 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.149 [2024-12-05 20:40:07.336450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:14.149 20:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.149 Malloc0 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.149 [2024-12-05 20:40:07.380775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.149 20:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.149 { 00:20:14.149 "params": { 00:20:14.149 "name": "Nvme$subsystem", 00:20:14.149 "trtype": "$TEST_TRANSPORT", 00:20:14.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.149 "adrfam": "ipv4", 00:20:14.149 "trsvcid": "$NVMF_PORT", 00:20:14.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.149 "hdgst": ${hdgst:-false}, 00:20:14.149 "ddgst": ${ddgst:-false} 00:20:14.149 }, 00:20:14.149 "method": "bdev_nvme_attach_controller" 00:20:14.149 } 00:20:14.149 EOF 00:20:14.149 )") 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:14.149 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:14.149 "params": { 00:20:14.149 "name": "Nvme1", 00:20:14.149 "trtype": "tcp", 00:20:14.149 "traddr": "10.0.0.2", 00:20:14.149 "adrfam": "ipv4", 00:20:14.149 "trsvcid": "4420", 00:20:14.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.149 "hdgst": false, 00:20:14.149 "ddgst": false 00:20:14.149 }, 00:20:14.149 "method": "bdev_nvme_attach_controller" 00:20:14.149 }' 00:20:14.149 [2024-12-05 20:40:07.428581] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:20:14.149 [2024-12-05 20:40:07.428624] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid374298 ] 00:20:14.149 [2024-12-05 20:40:07.504698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.149 [2024-12-05 20:40:07.549922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.149 [2024-12-05 20:40:07.550034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.149 [2024-12-05 20:40:07.550035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.716 I/O targets: 00:20:14.716 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:14.716 00:20:14.716 00:20:14.716 CUnit - A unit testing framework for C - Version 2.1-3 00:20:14.716 http://cunit.sourceforge.net/ 00:20:14.716 00:20:14.716 00:20:14.716 Suite: bdevio tests on: Nvme1n1 00:20:14.716 Test: blockdev write read block ...passed 00:20:14.716 Test: blockdev write zeroes read block ...passed 00:20:14.716 Test: blockdev write zeroes read no split ...passed 00:20:14.716 Test: blockdev write zeroes 
read split ...passed 00:20:14.716 Test: blockdev write zeroes read split partial ...passed 00:20:14.716 Test: blockdev reset ...[2024-12-05 20:40:08.036988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:14.716 [2024-12-05 20:40:08.037047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26eab10 (9): Bad file descriptor 00:20:14.716 [2024-12-05 20:40:08.049043] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:20:14.716 passed 00:20:14.716 Test: blockdev write read 8 blocks ...passed 00:20:14.716 Test: blockdev write read size > 128k ...passed 00:20:14.716 Test: blockdev write read invalid size ...passed 00:20:14.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:14.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:14.716 Test: blockdev write read max offset ...passed 00:20:14.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:14.975 Test: blockdev writev readv 8 blocks ...passed 00:20:14.975 Test: blockdev writev readv 30 x 1block ...passed 00:20:14.975 Test: blockdev writev readv block ...passed 00:20:14.975 Test: blockdev writev readv size > 128k ...passed 00:20:14.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:14.975 Test: blockdev comparev and writev ...[2024-12-05 20:40:08.259749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.975 [2024-12-05 20:40:08.259775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.975 [2024-12-05 20:40:08.259788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.975 [2024-12-05 
20:40:08.259796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.975 [2024-12-05 20:40:08.260038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.975 [2024-12-05 20:40:08.260048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:14.975 [2024-12-05 20:40:08.260068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.975 [2024-12-05 20:40:08.260076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:14.975 [2024-12-05 20:40:08.260288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.975 [2024-12-05 20:40:08.260297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:14.975 [2024-12-05 20:40:08.260307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.975 [2024-12-05 20:40:08.260313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:14.975 [2024-12-05 20:40:08.260525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.975 [2024-12-05 20:40:08.260534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:14.975 [2024-12-05 20:40:08.260544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.975 [2024-12-05 20:40:08.260550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:14.975 passed 00:20:14.975 Test: blockdev nvme passthru rw ...passed 00:20:14.975 Test: blockdev nvme passthru vendor specific ...[2024-12-05 20:40:08.342422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.975 [2024-12-05 20:40:08.342438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:14.975 [2024-12-05 20:40:08.342534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.975 [2024-12-05 20:40:08.342543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:14.975 [2024-12-05 20:40:08.342645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.975 [2024-12-05 20:40:08.342654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:14.975 [2024-12-05 20:40:08.342751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.975 [2024-12-05 20:40:08.342760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:14.975 passed 00:20:14.975 Test: blockdev nvme admin passthru ...passed 00:20:14.975 Test: blockdev copy ...passed 00:20:14.975 00:20:14.975 Run Summary: Type Total Ran Passed Failed Inactive 00:20:14.975 suites 1 1 n/a 0 0 00:20:14.975 tests 23 23 23 0 0 00:20:14.975 asserts 152 152 152 0 n/a 00:20:14.975 00:20:14.975 Elapsed time = 1.066 seconds 
00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:15.233 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:15.233 rmmod nvme_tcp 00:20:15.233 rmmod nvme_fabrics 00:20:15.492 rmmod nvme_keyring 00:20:15.492 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:15.492 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:20:15.492 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:15.492 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 374158 ']' 00:20:15.492 20:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 374158 00:20:15.492 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 374158 ']' 00:20:15.492 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 374158 00:20:15.492 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:15.492 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.492 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 374158 00:20:15.492 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:15.492 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:15.493 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 374158' 00:20:15.493 killing process with pid 374158 00:20:15.493 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 374158 00:20:15.493 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 374158 00:20:15.752 20:40:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:15.752 20:40:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:15.752 20:40:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:15.752 20:40:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:15.752 20:40:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:15.752 20:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:15.752 20:40:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:15.752 20:40:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:15.752 20:40:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:15.752 20:40:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.752 20:40:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.752 20:40:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:18.287 00:20:18.287 real 0m10.867s 00:20:18.287 user 0m13.769s 00:20:18.287 sys 0m5.341s 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.287 ************************************ 00:20:18.287 END TEST nvmf_bdevio_no_huge 00:20:18.287 ************************************ 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:18.287 
************************************ 00:20:18.287 START TEST nvmf_tls 00:20:18.287 ************************************ 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:18.287 * Looking for test storage... 00:20:18.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:18.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.287 --rc genhtml_branch_coverage=1 00:20:18.287 --rc genhtml_function_coverage=1 00:20:18.287 --rc genhtml_legend=1 00:20:18.287 --rc geninfo_all_blocks=1 00:20:18.287 --rc geninfo_unexecuted_blocks=1 00:20:18.287 00:20:18.287 ' 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:18.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.287 --rc genhtml_branch_coverage=1 00:20:18.287 --rc genhtml_function_coverage=1 00:20:18.287 --rc genhtml_legend=1 00:20:18.287 --rc geninfo_all_blocks=1 00:20:18.287 --rc geninfo_unexecuted_blocks=1 00:20:18.287 00:20:18.287 ' 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:18.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.287 --rc genhtml_branch_coverage=1 00:20:18.287 --rc genhtml_function_coverage=1 00:20:18.287 --rc genhtml_legend=1 00:20:18.287 --rc geninfo_all_blocks=1 00:20:18.287 --rc geninfo_unexecuted_blocks=1 00:20:18.287 00:20:18.287 ' 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:18.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.287 --rc genhtml_branch_coverage=1 00:20:18.287 --rc genhtml_function_coverage=1 00:20:18.287 --rc genhtml_legend=1 00:20:18.287 --rc geninfo_all_blocks=1 00:20:18.287 --rc geninfo_unexecuted_blocks=1 00:20:18.287 00:20:18.287 ' 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.287 
20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.287 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:18.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:20:18.288 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:24.872 20:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:24.872 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:24.872 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:24.872 20:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:24.872 Found net devices under 0000:af:00.0: cvl_0_0 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:24.872 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:24.873 Found net devices under 0000:af:00.1: cvl_0_1 00:20:24.873 20:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:24.873 
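An aside on the device-discovery phase traced above: nvmf/common.sh builds bash arrays (e810, x722, mlx) keyed by PCI vendor:device ID pairs, then classifies each discovered port. The same mapping can be sketched in Python for illustration only; the device IDs are copied verbatim from the log, but the real script reads a `pci_bus_cache` populated from sysfs rather than a static table:

```python
# Sketch of the NIC-family classification seen in nvmf/common.sh above.
# Vendor 0x8086 is Intel, 0x15b3 is Mellanox; the device IDs are the
# ones added to the e810/x722/mlx arrays in the trace. This static
# lookup table is an illustration, not SPDK's actual implementation.
INTEL, MELLANOX = "0x8086", "0x15b3"

FAMILIES = {
    (INTEL, "0x1592"): "e810",
    (INTEL, "0x159b"): "e810",
    (INTEL, "0x37d2"): "x722",
    (MELLANOX, "0xa2dc"): "mlx",
    (MELLANOX, "0x1021"): "mlx",
    (MELLANOX, "0xa2d6"): "mlx",
    (MELLANOX, "0x101d"): "mlx",
    (MELLANOX, "0x101b"): "mlx",
    (MELLANOX, "0x1017"): "mlx",
    (MELLANOX, "0x1019"): "mlx",
    (MELLANOX, "0x1015"): "mlx",
    (MELLANOX, "0x1013"): "mlx",
}

def classify(vendor: str, device: str) -> str:
    """Return the NIC family for a vendor:device pair, or 'unknown'."""
    return FAMILIES.get((vendor, device), "unknown")

# The two ports the log finds, 0000:af:00.0/.1, report 0x8086 - 0x159b:
print(classify("0x8086", "0x159b"))  # e810
```

This is why the trace takes the `[[ e810 == e810 ]]` branch and restricts `pci_devs` to the two E810 ports (bound to the `ice` driver) before probing their net devices.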
20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:24.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:20:24.873 00:20:24.873 --- 10.0.0.2 ping statistics --- 00:20:24.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.873 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:24.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:20:24.873 00:20:24.873 --- 10.0.0.1 ping statistics --- 00:20:24.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.873 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=378192 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 378192 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 378192 ']' 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.873 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.873 [2024-12-05 20:40:17.469555] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:20:24.873 [2024-12-05 20:40:17.469599] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.873 [2024-12-05 20:40:17.544792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.873 [2024-12-05 20:40:17.584443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.873 [2024-12-05 20:40:17.584475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:24.873 [2024-12-05 20:40:17.584481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.873 [2024-12-05 20:40:17.584487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.873 [2024-12-05 20:40:17.584491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.873 [2024-12-05 20:40:17.585012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.873 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.873 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:24.874 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:24.874 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:24.874 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.133 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.133 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:25.133 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:25.133 true 00:20:25.133 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:25.133 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:25.393 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:25.393 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:25.394 
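The `rpc.py sock_set_default_impl` / `sock_impl_set_options` / `sock_impl_get_options` calls traced above are thin wrappers that send JSON-RPC 2.0 requests to the target's Unix-domain RPC socket (the `waitforlisten` lines show it at /var/tmp/spdk.sock). As a rough sketch of what one of those invocations puts on the wire — the parameter names `impl_name` and `tls_version` are assumptions inferred from the CLI flags, not confirmed from the SPDK source:

```python
import json

def make_request(req_id: int, method: str, params: dict) -> bytes:
    # Minimal JSON-RPC 2.0 request framing, as a hand-rolled
    # illustration of what rpc.py sends over the Unix socket.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    }).encode()

# Roughly: rpc.py sock_impl_set_options -i ssl --tls-version 13
# (parameter names below are assumed from the CLI flags)
req = make_request(1, "sock_impl_set_options",
                   {"impl_name": "ssl", "tls_version": 13})
print(req.decode())
```

The paired `sock_impl_get_options | jq -r .tls_version` calls in the trace then read the setting back to verify each change took effect, which is why the log alternates set/get for tls-version 13, tls-version 7, and the ktls toggles.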
20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:25.653 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:25.653 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:25.653 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:25.653 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:25.653 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:25.913 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:25.913 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:26.172 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:26.172 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:26.172 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:26.172 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:26.172 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:26.172 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:26.172 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:26.432 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:26.432 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:26.692 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:26.692 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:26.692 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:26.692 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:26.692 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:26.951 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:26.951 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:26.951 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:26.952 20:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.GRhkxAw9Hl 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.lhIJyUZmFK 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GRhkxAw9Hl 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.lhIJyUZmFK 00:20:26.952 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:27.211 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:27.470 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.GRhkxAw9Hl 00:20:27.470 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GRhkxAw9Hl 00:20:27.470 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:27.730 [2024-12-05 20:40:20.945246] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.730 20:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:27.730 20:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:27.989 [2024-12-05 20:40:21.286113] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:27.989 [2024-12-05 20:40:21.286324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.989 20:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:28.249 malloc0 00:20:28.249 20:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:28.249 20:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GRhkxAw9Hl 00:20:28.509 20:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:28.768 20:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.GRhkxAw9Hl 00:20:38.751 Initializing NVMe Controllers 00:20:38.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:38.751 Initialization complete. Launching workers. 
00:20:38.751 ======================================================== 00:20:38.751 Latency(us) 00:20:38.751 Device Information : IOPS MiB/s Average min max 00:20:38.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18356.47 71.70 3486.58 715.14 5470.99 00:20:38.751 ======================================================== 00:20:38.751 Total : 18356.47 71.70 3486.58 715.14 5470.99 00:20:38.751 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GRhkxAw9Hl 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GRhkxAw9Hl 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=380868 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 380868 /var/tmp/bdevperf.sock 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 380868 ']' 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
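The `format_interchange_psk` calls earlier in the trace (the inline `python -` heredoc at nvmf/common.sh@733) turned the raw hex strings into the `NVMeTLSkey-1:01:<base64>:` interchange form written to /tmp/tmp.GRhkxAw9Hl and /tmp/tmp.lhIJyUZmFK. A sketch of what that heredoc plausibly computes — the trailing little-endian CRC-32 over the key text is an assumption based on the PSK interchange format, not code copied from the script:

```python
import base64
import zlib

def format_interchange_psk(key: str, hmac_id: int = 1) -> str:
    """Sketch of format_interchange_psk from the trace above: the key
    text plus its CRC-32 (assumed little-endian) is base64-encoded and
    wrapped in the NVMeTLSkey-1 prefix with the HMAC identifier."""
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + crc).decode()
    return f"NVMeTLSkey-1:{hmac_id:02x}:{b64}:"

# Same input the log formats first; the log's output begins
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1... (base64 of the key text).
print(format_interchange_psk("00112233445566778899aabbccddeeff"))
```

The key file is then registered via `keyring_file_add_key key0` and attached to the host with `nvmf_subsystem_add_host ... --psk key0`, which is the PSK the spdk_nvme_perf run above presents through `--psk-path`.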
00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.751 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.751 [2024-12-05 20:40:32.137728] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:20:38.751 [2024-12-05 20:40:32.137777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380868 ] 00:20:39.011 [2024-12-05 20:40:32.210626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.011 [2024-12-05 20:40:32.249341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.011 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.011 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:39.011 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GRhkxAw9Hl 00:20:39.270 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:20:39.270 [2024-12-05 20:40:32.651679] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.529 TLSTESTn1 00:20:39.529 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:39.529 Running I/O for 10 seconds... 00:20:41.409 5149.00 IOPS, 20.11 MiB/s [2024-12-05T19:40:36.228Z] 5114.50 IOPS, 19.98 MiB/s [2024-12-05T19:40:37.166Z] 5034.67 IOPS, 19.67 MiB/s [2024-12-05T19:40:38.105Z] 4969.50 IOPS, 19.41 MiB/s [2024-12-05T19:40:39.045Z] 4931.60 IOPS, 19.26 MiB/s [2024-12-05T19:40:39.983Z] 4950.83 IOPS, 19.34 MiB/s [2024-12-05T19:40:40.922Z] 4966.86 IOPS, 19.40 MiB/s [2024-12-05T19:40:41.861Z] 4969.25 IOPS, 19.41 MiB/s [2024-12-05T19:40:43.243Z] 4961.67 IOPS, 19.38 MiB/s [2024-12-05T19:40:43.243Z] 4943.10 IOPS, 19.31 MiB/s 00:20:49.802 Latency(us) 00:20:49.802 [2024-12-05T19:40:43.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.802 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:49.802 Verification LBA range: start 0x0 length 0x2000 00:20:49.802 TLSTESTn1 : 10.02 4947.20 19.33 0.00 0.00 25836.85 4468.36 64821.06 00:20:49.802 [2024-12-05T19:40:43.243Z] =================================================================================================================== 00:20:49.802 [2024-12-05T19:40:43.243Z] Total : 4947.20 19.33 0.00 0.00 25836.85 4468.36 64821.06 00:20:49.802 { 00:20:49.802 "results": [ 00:20:49.802 { 00:20:49.802 "job": "TLSTESTn1", 00:20:49.802 "core_mask": "0x4", 00:20:49.802 "workload": "verify", 00:20:49.802 "status": "finished", 00:20:49.802 "verify_range": { 00:20:49.802 "start": 0, 00:20:49.802 "length": 8192 00:20:49.802 }, 00:20:49.802 "queue_depth": 128, 00:20:49.802 "io_size": 4096, 00:20:49.802 "runtime": 10.01758, 00:20:49.802 "iops": 
4947.202817446929, 00:20:49.802 "mibps": 19.325011005652065, 00:20:49.802 "io_failed": 0, 00:20:49.802 "io_timeout": 0, 00:20:49.802 "avg_latency_us": 25836.846309247565, 00:20:49.802 "min_latency_us": 4468.363636363636, 00:20:49.802 "max_latency_us": 64821.06181818182 00:20:49.802 } 00:20:49.802 ], 00:20:49.802 "core_count": 1 00:20:49.802 } 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 380868 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 380868 ']' 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 380868 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 380868 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380868' 00:20:49.802 killing process with pid 380868 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 380868 00:20:49.802 Received shutdown signal, test time was about 10.000000 seconds 00:20:49.802 00:20:49.802 Latency(us) 00:20:49.802 [2024-12-05T19:40:43.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.802 [2024-12-05T19:40:43.243Z] 
=================================================================================================================== 00:20:49.802 [2024-12-05T19:40:43.243Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:49.802 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 380868 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhIJyUZmFK 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhIJyUZmFK 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhIJyUZmFK 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lhIJyUZmFK 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=382871 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 382871 /var/tmp/bdevperf.sock 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 382871 ']' 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.802 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.802 [2024-12-05 20:40:43.135463] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:20:49.802 [2024-12-05 20:40:43.135509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382871 ] 00:20:49.802 [2024-12-05 20:40:43.202918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.062 [2024-12-05 20:40:43.241860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.062 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.062 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:50.062 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lhIJyUZmFK 00:20:50.062 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:50.322 [2024-12-05 20:40:43.648307] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.322 [2024-12-05 20:40:43.655651] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:50.322 [2024-12-05 20:40:43.656574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17732b0 (107): Transport endpoint is not connected 00:20:50.322 [2024-12-05 20:40:43.657568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17732b0 (9): Bad file descriptor 00:20:50.322 
[2024-12-05 20:40:43.658570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:50.322 [2024-12-05 20:40:43.658587] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:50.322 [2024-12-05 20:40:43.658594] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:50.322 [2024-12-05 20:40:43.658602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:50.322 request: 00:20:50.322 { 00:20:50.322 "name": "TLSTEST", 00:20:50.322 "trtype": "tcp", 00:20:50.322 "traddr": "10.0.0.2", 00:20:50.322 "adrfam": "ipv4", 00:20:50.322 "trsvcid": "4420", 00:20:50.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:50.322 "prchk_reftag": false, 00:20:50.322 "prchk_guard": false, 00:20:50.322 "hdgst": false, 00:20:50.322 "ddgst": false, 00:20:50.322 "psk": "key0", 00:20:50.322 "allow_unrecognized_csi": false, 00:20:50.322 "method": "bdev_nvme_attach_controller", 00:20:50.322 "req_id": 1 00:20:50.322 } 00:20:50.322 Got JSON-RPC error response 00:20:50.322 response: 00:20:50.322 { 00:20:50.322 "code": -5, 00:20:50.322 "message": "Input/output error" 00:20:50.322 } 00:20:50.322 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 382871 00:20:50.322 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 382871 ']' 00:20:50.322 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 382871 00:20:50.322 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:50.322 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.322 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382871 00:20:50.322 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:50.322 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:50.322 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382871' 00:20:50.322 killing process with pid 382871 00:20:50.322 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 382871 00:20:50.322 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.322 00:20:50.322 Latency(us) 00:20:50.322 [2024-12-05T19:40:43.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.322 [2024-12-05T19:40:43.763Z] =================================================================================================================== 00:20:50.322 [2024-12-05T19:40:43.763Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:50.322 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 382871 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GRhkxAw9Hl 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GRhkxAw9Hl 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GRhkxAw9Hl 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GRhkxAw9Hl 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=382970 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 382970 
/var/tmp/bdevperf.sock 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 382970 ']' 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.582 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.582 [2024-12-05 20:40:43.923290] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:20:50.582 [2024-12-05 20:40:43.923336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382970 ] 00:20:50.582 [2024-12-05 20:40:43.986332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.843 [2024-12-05 20:40:44.025717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.843 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.843 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:50.843 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GRhkxAw9Hl 00:20:51.103 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:51.103 [2024-12-05 20:40:44.445478] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.103 [2024-12-05 20:40:44.450003] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:51.103 [2024-12-05 20:40:44.450023] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:51.103 [2024-12-05 20:40:44.450046] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:51.103 [2024-12-05 20:40:44.450710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230b2b0 (107): Transport endpoint is not connected 00:20:51.103 [2024-12-05 20:40:44.451702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230b2b0 (9): Bad file descriptor 00:20:51.103 [2024-12-05 20:40:44.452704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:51.103 [2024-12-05 20:40:44.452712] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:51.103 [2024-12-05 20:40:44.452719] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:51.103 [2024-12-05 20:40:44.452728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:51.103 request: 00:20:51.103 { 00:20:51.103 "name": "TLSTEST", 00:20:51.103 "trtype": "tcp", 00:20:51.103 "traddr": "10.0.0.2", 00:20:51.103 "adrfam": "ipv4", 00:20:51.103 "trsvcid": "4420", 00:20:51.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.103 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:51.103 "prchk_reftag": false, 00:20:51.103 "prchk_guard": false, 00:20:51.103 "hdgst": false, 00:20:51.103 "ddgst": false, 00:20:51.103 "psk": "key0", 00:20:51.103 "allow_unrecognized_csi": false, 00:20:51.103 "method": "bdev_nvme_attach_controller", 00:20:51.103 "req_id": 1 00:20:51.103 } 00:20:51.103 Got JSON-RPC error response 00:20:51.103 response: 00:20:51.103 { 00:20:51.103 "code": -5, 00:20:51.103 "message": "Input/output error" 00:20:51.103 } 00:20:51.104 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 382970 00:20:51.104 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 382970 ']' 00:20:51.104 20:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 382970 00:20:51.104 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:51.104 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.104 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382970 00:20:51.104 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:51.104 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:51.104 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382970' 00:20:51.104 killing process with pid 382970 00:20:51.104 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 382970 00:20:51.104 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.104 00:20:51.104 Latency(us) 00:20:51.104 [2024-12-05T19:40:44.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.104 [2024-12-05T19:40:44.545Z] =================================================================================================================== 00:20:51.104 [2024-12-05T19:40:44.545Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.104 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 382970 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:51.364 20:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GRhkxAw9Hl 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GRhkxAw9Hl 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GRhkxAw9Hl 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GRhkxAw9Hl 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=383220 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 383220 /var/tmp/bdevperf.sock 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 383220 ']' 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.364 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.364 [2024-12-05 20:40:44.717534] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:20:51.364 [2024-12-05 20:40:44.717579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383220 ] 00:20:51.364 [2024-12-05 20:40:44.780520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.626 [2024-12-05 20:40:44.820182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.626 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.626 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:51.626 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GRhkxAw9Hl 00:20:51.885 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.885 [2024-12-05 20:40:45.255900] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.885 [2024-12-05 20:40:45.260461] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:51.885 [2024-12-05 20:40:45.260481] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:51.885 [2024-12-05 20:40:45.260509] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:51.885 [2024-12-05 20:40:45.261158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201f2b0 (107): Transport endpoint is not connected 00:20:51.885 [2024-12-05 20:40:45.262151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201f2b0 (9): Bad file descriptor 00:20:51.885 [2024-12-05 20:40:45.263152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:51.885 [2024-12-05 20:40:45.263161] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:51.885 [2024-12-05 20:40:45.263167] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:51.885 [2024-12-05 20:40:45.263177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:20:51.885 request: 00:20:51.885 { 00:20:51.885 "name": "TLSTEST", 00:20:51.885 "trtype": "tcp", 00:20:51.885 "traddr": "10.0.0.2", 00:20:51.885 "adrfam": "ipv4", 00:20:51.885 "trsvcid": "4420", 00:20:51.885 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:51.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.885 "prchk_reftag": false, 00:20:51.885 "prchk_guard": false, 00:20:51.885 "hdgst": false, 00:20:51.885 "ddgst": false, 00:20:51.885 "psk": "key0", 00:20:51.885 "allow_unrecognized_csi": false, 00:20:51.885 "method": "bdev_nvme_attach_controller", 00:20:51.885 "req_id": 1 00:20:51.885 } 00:20:51.885 Got JSON-RPC error response 00:20:51.885 response: 00:20:51.885 { 00:20:51.885 "code": -5, 00:20:51.885 "message": "Input/output error" 00:20:51.885 } 00:20:51.885 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 383220 00:20:51.885 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 383220 ']' 00:20:51.885 20:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 383220 00:20:51.885 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:51.885 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.885 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383220 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383220' 00:20:52.143 killing process with pid 383220 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 383220 00:20:52.143 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.143 00:20:52.143 Latency(us) 00:20:52.143 [2024-12-05T19:40:45.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.143 [2024-12-05T19:40:45.584Z] =================================================================================================================== 00:20:52.143 [2024-12-05T19:40:45.584Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 383220 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:52.143 20:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=383256 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:52.143 20:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 383256 /var/tmp/bdevperf.sock 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 383256 ']' 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.143 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.143 [2024-12-05 20:40:45.526906] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:20:52.143 [2024-12-05 20:40:45.526953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383256 ] 00:20:52.401 [2024-12-05 20:40:45.583882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.401 [2024-12-05 20:40:45.623040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.401 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.401 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:52.401 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:52.660 [2024-12-05 20:40:45.868916] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:52.660 [2024-12-05 20:40:45.868942] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:52.660 request: 00:20:52.660 { 00:20:52.660 "name": "key0", 00:20:52.660 "path": "", 00:20:52.660 "method": "keyring_file_add_key", 00:20:52.660 "req_id": 1 00:20:52.660 } 00:20:52.660 Got JSON-RPC error response 00:20:52.660 response: 00:20:52.660 { 00:20:52.660 "code": -1, 00:20:52.660 "message": "Operation not permitted" 00:20:52.660 } 00:20:52.660 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:52.660 [2024-12-05 20:40:46.029408] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:20:52.660 [2024-12-05 20:40:46.029432] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:52.660 request: 00:20:52.660 { 00:20:52.660 "name": "TLSTEST", 00:20:52.660 "trtype": "tcp", 00:20:52.660 "traddr": "10.0.0.2", 00:20:52.660 "adrfam": "ipv4", 00:20:52.660 "trsvcid": "4420", 00:20:52.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.660 "prchk_reftag": false, 00:20:52.660 "prchk_guard": false, 00:20:52.660 "hdgst": false, 00:20:52.660 "ddgst": false, 00:20:52.660 "psk": "key0", 00:20:52.660 "allow_unrecognized_csi": false, 00:20:52.660 "method": "bdev_nvme_attach_controller", 00:20:52.660 "req_id": 1 00:20:52.660 } 00:20:52.661 Got JSON-RPC error response 00:20:52.661 response: 00:20:52.661 { 00:20:52.661 "code": -126, 00:20:52.661 "message": "Required key not available" 00:20:52.661 } 00:20:52.661 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 383256 00:20:52.661 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 383256 ']' 00:20:52.661 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 383256 00:20:52.661 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:52.661 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.661 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383256 00:20:52.661 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:52.661 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:52.661 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383256' 00:20:52.661 killing process with pid 383256 00:20:52.661 
20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 383256 00:20:52.661 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.661 00:20:52.661 Latency(us) 00:20:52.661 [2024-12-05T19:40:46.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.661 [2024-12-05T19:40:46.102Z] =================================================================================================================== 00:20:52.661 [2024-12-05T19:40:46.102Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:52.661 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 383256 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 378192 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 378192 ']' 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 378192 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378192 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 378192' 00:20:52.920 killing process with pid 378192 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 378192 00:20:52.920 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 378192 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Z0E6smaUsB 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:53.180 20:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Z0E6smaUsB 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=383531 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 383531 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 383531 ']' 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.180 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.180 [2024-12-05 20:40:46.549397] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
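The `key_long` value produced above by `format_interchange_psk` follows the NVMe TLS PSK interchange format: a `NVMeTLSkey-1:<digest>:` prefix, then the configured PSK bytes with a CRC-32 appended, base64-encoded, and a trailing colon. A sketch reproducing the transform, assuming the CRC-32 is appended in little-endian byte order (as in common implementations of this format):

```python
import base64
import struct
import zlib

# PSK hex string and digest selector ("2") taken from the
# format_interchange_psk invocation in the log above.
configured_psk = b"00112233445566778899aabbccddeeff0011223344556677"
digest = 2  # 01 = HMAC SHA-256, 02 = HMAC SHA-384

# Append CRC-32 of the PSK bytes (assumed little-endian) and base64-encode.
crc = zlib.crc32(configured_psk) & 0xFFFFFFFF
encoded = base64.b64encode(configured_psk + struct.pack("<I", crc)).decode()
key_long = f"NVMeTLSkey-1:{digest:02d}:{encoded}:"
```

Since the 48-byte PSK is a multiple of 3, the first 64 base64 characters encode the PSK alone and the final 8 characters carry the CRC, which is why the log's `key_long` begins `NVMeTLSkey-1:02:MDAxMTIy…`.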
00:20:53.180 [2024-12-05 20:40:46.549440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.440 [2024-12-05 20:40:46.627598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.440 [2024-12-05 20:40:46.659986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.440 [2024-12-05 20:40:46.660018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.440 [2024-12-05 20:40:46.660024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.441 [2024-12-05 20:40:46.660029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.441 [2024-12-05 20:40:46.660034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:53.441 [2024-12-05 20:40:46.660577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.441 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.441 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:53.441 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.441 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.441 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.441 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.441 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Z0E6smaUsB 00:20:53.441 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Z0E6smaUsB 00:20:53.441 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:53.700 [2024-12-05 20:40:46.959195] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.700 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:53.959 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:53.959 [2024-12-05 20:40:47.308088] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.959 [2024-12-05 20:40:47.308318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:53.959 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:54.219 malloc0 00:20:54.219 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:54.219 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Z0E6smaUsB 00:20:54.479 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:54.738 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z0E6smaUsB 00:20:54.738 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:54.738 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Z0E6smaUsB 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=383814 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 383814 /var/tmp/bdevperf.sock 
00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 383814 ']' 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.739 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.739 [2024-12-05 20:40:48.038674] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:20:54.739 [2024-12-05 20:40:48.038722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383814 ] 00:20:54.739 [2024-12-05 20:40:48.110744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.739 [2024-12-05 20:40:48.147549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.676 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.676 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:55.676 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z0E6smaUsB 00:20:55.676 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:55.934 [2024-12-05 20:40:49.164465] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.934 TLSTESTn1 00:20:55.934 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:55.934 Running I/O for 10 seconds... 
00:20:58.240 5329.00 IOPS, 20.82 MiB/s [2024-12-05T19:40:52.616Z] 5323.50 IOPS, 20.79 MiB/s [2024-12-05T19:40:53.556Z] 5173.67 IOPS, 20.21 MiB/s [2024-12-05T19:40:54.495Z] 5029.75 IOPS, 19.65 MiB/s [2024-12-05T19:40:55.434Z] 4926.80 IOPS, 19.25 MiB/s [2024-12-05T19:40:56.373Z] 4835.33 IOPS, 18.89 MiB/s [2024-12-05T19:40:57.751Z] 4777.14 IOPS, 18.66 MiB/s [2024-12-05T19:40:58.712Z] 4714.50 IOPS, 18.42 MiB/s [2024-12-05T19:40:59.652Z] 4643.56 IOPS, 18.14 MiB/s [2024-12-05T19:40:59.652Z] 4599.00 IOPS, 17.96 MiB/s 00:21:06.211 Latency(us) 00:21:06.211 [2024-12-05T19:40:59.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.211 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:06.211 Verification LBA range: start 0x0 length 0x2000 00:21:06.211 TLSTESTn1 : 10.02 4602.08 17.98 0.00 0.00 27773.38 4259.84 70063.94 00:21:06.211 [2024-12-05T19:40:59.652Z] =================================================================================================================== 00:21:06.211 [2024-12-05T19:40:59.652Z] Total : 4602.08 17.98 0.00 0.00 27773.38 4259.84 70063.94 00:21:06.211 { 00:21:06.211 "results": [ 00:21:06.211 { 00:21:06.211 "job": "TLSTESTn1", 00:21:06.211 "core_mask": "0x4", 00:21:06.211 "workload": "verify", 00:21:06.211 "status": "finished", 00:21:06.211 "verify_range": { 00:21:06.211 "start": 0, 00:21:06.211 "length": 8192 00:21:06.211 }, 00:21:06.211 "queue_depth": 128, 00:21:06.211 "io_size": 4096, 00:21:06.211 "runtime": 10.020911, 00:21:06.211 "iops": 4602.076597626703, 00:21:06.211 "mibps": 17.97686170947931, 00:21:06.211 "io_failed": 0, 00:21:06.211 "io_timeout": 0, 00:21:06.211 "avg_latency_us": 27773.384171090525, 00:21:06.211 "min_latency_us": 4259.84, 00:21:06.211 "max_latency_us": 70063.94181818182 00:21:06.211 } 00:21:06.211 ], 00:21:06.211 "core_count": 1 00:21:06.211 } 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 383814 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 383814 ']' 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 383814 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383814 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383814' 00:21:06.211 killing process with pid 383814 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 383814 00:21:06.211 Received shutdown signal, test time was about 10.000000 seconds 00:21:06.211 00:21:06.211 Latency(us) 00:21:06.211 [2024-12-05T19:40:59.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.211 [2024-12-05T19:40:59.652Z] =================================================================================================================== 00:21:06.211 [2024-12-05T19:40:59.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.211 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 383814 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Z0E6smaUsB 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z0E6smaUsB 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z0E6smaUsB 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z0E6smaUsB 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Z0E6smaUsB 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=385916 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 385916 /var/tmp/bdevperf.sock 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 385916 ']' 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.212 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.472 [2024-12-05 20:40:59.659679] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
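The run below is expected to fail: the key file was loosened to mode 0666 above, and SPDK's file keyring rejects key files accessible by group or other (the log reports "Invalid permissions for key file '/tmp/tmp.Z0E6smaUsB': 0100666"). A minimal sketch of that permission rule, as a hypothetical Python helper rather than the actual `keyring.c` check:

```python
import os
import stat
import tempfile

def check_key_perms(path: str) -> bool:
    """Illustrative approximation of the keyring permission rule: a key
    file with any group/other permission bits set is rejected."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0  # only the owner may access the key

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)
assert not check_key_perms(path)  # rejected, as in the 0100666 case below
os.chmod(path, 0o600)
assert check_key_perms(path)      # accepted earlier in the test (chmod 0600)
os.unlink(path)
```

With the key rejected at add time, the subsequent `bdev_nvme_attach_controller` again fails with "Required key not available" (-126), which the `NOT run_bdevperf` wrapper treats as the expected outcome.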
00:21:06.472 [2024-12-05 20:40:59.659727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385916 ] 00:21:06.472 [2024-12-05 20:40:59.724419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.472 [2024-12-05 20:40:59.761181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.472 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.472 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:06.472 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z0E6smaUsB 00:21:06.731 [2024-12-05 20:41:00.007887] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Z0E6smaUsB': 0100666 00:21:06.731 [2024-12-05 20:41:00.007913] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:06.731 request: 00:21:06.731 { 00:21:06.731 "name": "key0", 00:21:06.731 "path": "/tmp/tmp.Z0E6smaUsB", 00:21:06.731 "method": "keyring_file_add_key", 00:21:06.731 "req_id": 1 00:21:06.731 } 00:21:06.731 Got JSON-RPC error response 00:21:06.731 response: 00:21:06.731 { 00:21:06.731 "code": -1, 00:21:06.731 "message": "Operation not permitted" 00:21:06.731 } 00:21:06.731 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:06.990 [2024-12-05 20:41:00.184429] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.990 [2024-12-05 20:41:00.184463] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:06.990 request: 00:21:06.990 { 00:21:06.990 "name": "TLSTEST", 00:21:06.990 "trtype": "tcp", 00:21:06.990 "traddr": "10.0.0.2", 00:21:06.990 "adrfam": "ipv4", 00:21:06.990 "trsvcid": "4420", 00:21:06.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.990 "prchk_reftag": false, 00:21:06.990 "prchk_guard": false, 00:21:06.990 "hdgst": false, 00:21:06.990 "ddgst": false, 00:21:06.990 "psk": "key0", 00:21:06.990 "allow_unrecognized_csi": false, 00:21:06.990 "method": "bdev_nvme_attach_controller", 00:21:06.990 "req_id": 1 00:21:06.990 } 00:21:06.990 Got JSON-RPC error response 00:21:06.990 response: 00:21:06.990 { 00:21:06.990 "code": -126, 00:21:06.990 "message": "Required key not available" 00:21:06.990 } 00:21:06.990 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 385916 00:21:06.990 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 385916 ']' 00:21:06.990 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 385916 00:21:06.990 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:06.990 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.990 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 385916 00:21:06.990 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:06.990 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:06.990 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 385916' 00:21:06.990 killing process with pid 385916 00:21:06.990 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 385916 00:21:06.990 Received shutdown signal, test time was about 10.000000 seconds 00:21:06.990 00:21:06.990 Latency(us) 00:21:06.990 [2024-12-05T19:41:00.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.991 [2024-12-05T19:41:00.432Z] =================================================================================================================== 00:21:06.991 [2024-12-05T19:41:00.432Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 385916 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 383531 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 383531 ']' 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 383531 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.991 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383531 00:21:07.250 20:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383531' 00:21:07.250 killing process with pid 383531 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 383531 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 383531 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=386047 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 386047 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:07.250 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 386047 ']' 00:21:07.251 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.251 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.251 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:07.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.251 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.251 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.251 [2024-12-05 20:41:00.689229] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:07.251 [2024-12-05 20:41:00.689272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.510 [2024-12-05 20:41:00.757283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.510 [2024-12-05 20:41:00.795028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.510 [2024-12-05 20:41:00.795066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.510 [2024-12-05 20:41:00.795073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.510 [2024-12-05 20:41:00.795079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.510 [2024-12-05 20:41:00.795083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:07.510 [2024-12-05 20:41:00.795616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.510 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.510 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:07.510 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Z0E6smaUsB 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Z0E6smaUsB 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Z0E6smaUsB 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Z0E6smaUsB 00:21:07.511 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:07.770 [2024-12-05 20:41:01.083066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.770 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:08.029 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:08.030 [2024-12-05 20:41:01.447999] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:08.030 [2024-12-05 20:41:01.448189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.289 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:08.289 malloc0 00:21:08.289 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:08.548 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Z0E6smaUsB 00:21:08.811 [2024-12-05 20:41:01.993203] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Z0E6smaUsB': 0100666 00:21:08.811 [2024-12-05 20:41:01.993224] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:08.811 request: 00:21:08.811 { 00:21:08.811 "name": "key0", 00:21:08.811 "path": "/tmp/tmp.Z0E6smaUsB", 00:21:08.811 "method": "keyring_file_add_key", 00:21:08.811 "req_id": 1 
00:21:08.811 } 00:21:08.811 Got JSON-RPC error response 00:21:08.811 response: 00:21:08.811 { 00:21:08.811 "code": -1, 00:21:08.811 "message": "Operation not permitted" 00:21:08.811 } 00:21:08.811 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:08.811 [2024-12-05 20:41:02.169678] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:08.811 [2024-12-05 20:41:02.169706] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:08.811 request: 00:21:08.811 { 00:21:08.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.811 "host": "nqn.2016-06.io.spdk:host1", 00:21:08.811 "psk": "key0", 00:21:08.811 "method": "nvmf_subsystem_add_host", 00:21:08.811 "req_id": 1 00:21:08.811 } 00:21:08.811 Got JSON-RPC error response 00:21:08.811 response: 00:21:08.811 { 00:21:08.811 "code": -32603, 00:21:08.811 "message": "Internal error" 00:21:08.811 } 00:21:08.811 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:08.811 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:08.811 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:08.811 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:08.811 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 386047 00:21:08.811 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 386047 ']' 00:21:08.811 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 386047 00:21:08.811 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:08.811 20:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.811 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386047 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386047' 00:21:09.070 killing process with pid 386047 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 386047 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 386047 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Z0E6smaUsB 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=386483 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 386483 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 386483 ']' 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
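The `keyring_file_add_key` failure earlier in this log ("Invalid permissions for key file … 0100666" followed by "Operation not permitted") and the `chmod 0600` that precedes the successful retry illustrate a general requirement: SPDK's file-based keyring refuses PSK files that are readable by group or other. A minimal sketch of preparing a key file so the RPC accepts it, using a hypothetical key path (the real test uses a mktemp file like `/tmp/tmp.Z0E6smaUsB`):

```shell
#!/bin/sh
# Sketch, based on the behavior shown in this log: SPDK rejects a PSK
# file with mode 0666 and accepts it after chmod 0600. The key path and
# contents below are placeholders for illustration only.
KEY=/tmp/tls_psk_example.key

umask 077                                   # new files default to 0600
printf '%s\n' 'NVMeTLSkey-1:01:placeholder' > "$KEY"   # placeholder PSK material
chmod 0600 "$KEY"                           # required: no group/other access

# Verify the mode before handing the file to keyring_file_add_key;
# a 0666 file would reproduce the "Operation not permitted" error above.
stat -c '%a' "$KEY"
```

With the file at 0600, the same RPC sequence shown in the log (`keyring_file_add_key key0 $KEY`, then `nvmf_subsystem_add_host … --psk key0`) succeeds rather than returning the JSON-RPC errors captured above.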
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.070 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.070 [2024-12-05 20:41:02.474635] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:09.070 [2024-12-05 20:41:02.474678] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.330 [2024-12-05 20:41:02.551546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.330 [2024-12-05 20:41:02.588781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.330 [2024-12-05 20:41:02.588816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.330 [2024-12-05 20:41:02.588822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.330 [2024-12-05 20:41:02.588828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.330 [2024-12-05 20:41:02.588832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:09.330 [2024-12-05 20:41:02.589397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.899 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.899 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:09.899 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.899 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.899 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.899 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.899 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Z0E6smaUsB 00:21:09.899 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Z0E6smaUsB 00:21:09.899 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:10.158 [2024-12-05 20:41:03.483095] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.158 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:10.418 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:10.677 [2024-12-05 20:41:03.864061] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:10.677 [2024-12-05 20:41:03.864276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:10.677 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:10.677 malloc0 00:21:10.677 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:10.936 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Z0E6smaUsB 00:21:11.195 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:11.196 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:11.196 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=386780 00:21:11.196 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.196 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 386780 /var/tmp/bdevperf.sock 00:21:11.196 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 386780 ']' 00:21:11.196 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.196 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.196 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:21:11.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.196 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.196 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.455 [2024-12-05 20:41:04.650516] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:11.455 [2024-12-05 20:41:04.650558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386780 ] 00:21:11.455 [2024-12-05 20:41:04.725742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.455 [2024-12-05 20:41:04.764602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.455 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.455 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.455 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z0E6smaUsB 00:21:11.714 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:11.973 [2024-12-05 20:41:05.224019] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.973 TLSTESTn1 00:21:11.973 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:12.230 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:12.230 "subsystems": [ 00:21:12.230 { 00:21:12.230 "subsystem": "keyring", 00:21:12.230 "config": [ 00:21:12.230 { 00:21:12.230 "method": "keyring_file_add_key", 00:21:12.230 "params": { 00:21:12.230 "name": "key0", 00:21:12.230 "path": "/tmp/tmp.Z0E6smaUsB" 00:21:12.230 } 00:21:12.230 } 00:21:12.230 ] 00:21:12.230 }, 00:21:12.230 { 00:21:12.230 "subsystem": "iobuf", 00:21:12.230 "config": [ 00:21:12.230 { 00:21:12.230 "method": "iobuf_set_options", 00:21:12.230 "params": { 00:21:12.230 "small_pool_count": 8192, 00:21:12.230 "large_pool_count": 1024, 00:21:12.230 "small_bufsize": 8192, 00:21:12.230 "large_bufsize": 135168, 00:21:12.230 "enable_numa": false 00:21:12.230 } 00:21:12.230 } 00:21:12.230 ] 00:21:12.230 }, 00:21:12.230 { 00:21:12.230 "subsystem": "sock", 00:21:12.230 "config": [ 00:21:12.230 { 00:21:12.230 "method": "sock_set_default_impl", 00:21:12.230 "params": { 00:21:12.230 "impl_name": "posix" 00:21:12.230 } 00:21:12.230 }, 00:21:12.230 { 00:21:12.230 "method": "sock_impl_set_options", 00:21:12.230 "params": { 00:21:12.230 "impl_name": "ssl", 00:21:12.230 "recv_buf_size": 4096, 00:21:12.230 "send_buf_size": 4096, 00:21:12.230 "enable_recv_pipe": true, 00:21:12.230 "enable_quickack": false, 00:21:12.230 "enable_placement_id": 0, 00:21:12.230 "enable_zerocopy_send_server": true, 00:21:12.230 "enable_zerocopy_send_client": false, 00:21:12.230 "zerocopy_threshold": 0, 00:21:12.230 "tls_version": 0, 00:21:12.230 "enable_ktls": false 00:21:12.230 } 00:21:12.230 }, 00:21:12.230 { 00:21:12.230 "method": "sock_impl_set_options", 00:21:12.230 "params": { 00:21:12.230 "impl_name": "posix", 00:21:12.230 "recv_buf_size": 2097152, 00:21:12.230 "send_buf_size": 2097152, 00:21:12.230 "enable_recv_pipe": true, 00:21:12.230 "enable_quickack": false, 00:21:12.230 "enable_placement_id": 0, 
00:21:12.230 "enable_zerocopy_send_server": true, 00:21:12.230 "enable_zerocopy_send_client": false, 00:21:12.230 "zerocopy_threshold": 0, 00:21:12.230 "tls_version": 0, 00:21:12.230 "enable_ktls": false 00:21:12.230 } 00:21:12.230 } 00:21:12.230 ] 00:21:12.230 }, 00:21:12.230 { 00:21:12.230 "subsystem": "vmd", 00:21:12.230 "config": [] 00:21:12.230 }, 00:21:12.230 { 00:21:12.230 "subsystem": "accel", 00:21:12.230 "config": [ 00:21:12.230 { 00:21:12.230 "method": "accel_set_options", 00:21:12.230 "params": { 00:21:12.230 "small_cache_size": 128, 00:21:12.230 "large_cache_size": 16, 00:21:12.230 "task_count": 2048, 00:21:12.230 "sequence_count": 2048, 00:21:12.230 "buf_count": 2048 00:21:12.230 } 00:21:12.230 } 00:21:12.230 ] 00:21:12.230 }, 00:21:12.230 { 00:21:12.230 "subsystem": "bdev", 00:21:12.230 "config": [ 00:21:12.230 { 00:21:12.230 "method": "bdev_set_options", 00:21:12.230 "params": { 00:21:12.230 "bdev_io_pool_size": 65535, 00:21:12.230 "bdev_io_cache_size": 256, 00:21:12.230 "bdev_auto_examine": true, 00:21:12.230 "iobuf_small_cache_size": 128, 00:21:12.230 "iobuf_large_cache_size": 16 00:21:12.230 } 00:21:12.230 }, 00:21:12.230 { 00:21:12.230 "method": "bdev_raid_set_options", 00:21:12.230 "params": { 00:21:12.230 "process_window_size_kb": 1024, 00:21:12.230 "process_max_bandwidth_mb_sec": 0 00:21:12.230 } 00:21:12.230 }, 00:21:12.230 { 00:21:12.230 "method": "bdev_iscsi_set_options", 00:21:12.230 "params": { 00:21:12.230 "timeout_sec": 30 00:21:12.230 } 00:21:12.230 }, 00:21:12.230 { 00:21:12.230 "method": "bdev_nvme_set_options", 00:21:12.230 "params": { 00:21:12.230 "action_on_timeout": "none", 00:21:12.230 "timeout_us": 0, 00:21:12.230 "timeout_admin_us": 0, 00:21:12.230 "keep_alive_timeout_ms": 10000, 00:21:12.230 "arbitration_burst": 0, 00:21:12.230 "low_priority_weight": 0, 00:21:12.230 "medium_priority_weight": 0, 00:21:12.230 "high_priority_weight": 0, 00:21:12.230 "nvme_adminq_poll_period_us": 10000, 00:21:12.230 "nvme_ioq_poll_period_us": 0, 
00:21:12.230 "io_queue_requests": 0, 00:21:12.230 "delay_cmd_submit": true, 00:21:12.230 "transport_retry_count": 4, 00:21:12.230 "bdev_retry_count": 3, 00:21:12.230 "transport_ack_timeout": 0, 00:21:12.230 "ctrlr_loss_timeout_sec": 0, 00:21:12.230 "reconnect_delay_sec": 0, 00:21:12.230 "fast_io_fail_timeout_sec": 0, 00:21:12.230 "disable_auto_failback": false, 00:21:12.230 "generate_uuids": false, 00:21:12.230 "transport_tos": 0, 00:21:12.230 "nvme_error_stat": false, 00:21:12.230 "rdma_srq_size": 0, 00:21:12.230 "io_path_stat": false, 00:21:12.230 "allow_accel_sequence": false, 00:21:12.230 "rdma_max_cq_size": 0, 00:21:12.230 "rdma_cm_event_timeout_ms": 0, 00:21:12.230 "dhchap_digests": [ 00:21:12.230 "sha256", 00:21:12.230 "sha384", 00:21:12.230 "sha512" 00:21:12.230 ], 00:21:12.230 "dhchap_dhgroups": [ 00:21:12.230 "null", 00:21:12.230 "ffdhe2048", 00:21:12.230 "ffdhe3072", 00:21:12.230 "ffdhe4096", 00:21:12.231 "ffdhe6144", 00:21:12.231 "ffdhe8192" 00:21:12.231 ] 00:21:12.231 } 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "method": "bdev_nvme_set_hotplug", 00:21:12.231 "params": { 00:21:12.231 "period_us": 100000, 00:21:12.231 "enable": false 00:21:12.231 } 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "method": "bdev_malloc_create", 00:21:12.231 "params": { 00:21:12.231 "name": "malloc0", 00:21:12.231 "num_blocks": 8192, 00:21:12.231 "block_size": 4096, 00:21:12.231 "physical_block_size": 4096, 00:21:12.231 "uuid": "2f922cb0-84ad-48ec-a2f8-29fa9baff454", 00:21:12.231 "optimal_io_boundary": 0, 00:21:12.231 "md_size": 0, 00:21:12.231 "dif_type": 0, 00:21:12.231 "dif_is_head_of_md": false, 00:21:12.231 "dif_pi_format": 0 00:21:12.231 } 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "method": "bdev_wait_for_examine" 00:21:12.231 } 00:21:12.231 ] 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "subsystem": "nbd", 00:21:12.231 "config": [] 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "subsystem": "scheduler", 00:21:12.231 "config": [ 00:21:12.231 { 00:21:12.231 "method": 
"framework_set_scheduler", 00:21:12.231 "params": { 00:21:12.231 "name": "static" 00:21:12.231 } 00:21:12.231 } 00:21:12.231 ] 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "subsystem": "nvmf", 00:21:12.231 "config": [ 00:21:12.231 { 00:21:12.231 "method": "nvmf_set_config", 00:21:12.231 "params": { 00:21:12.231 "discovery_filter": "match_any", 00:21:12.231 "admin_cmd_passthru": { 00:21:12.231 "identify_ctrlr": false 00:21:12.231 }, 00:21:12.231 "dhchap_digests": [ 00:21:12.231 "sha256", 00:21:12.231 "sha384", 00:21:12.231 "sha512" 00:21:12.231 ], 00:21:12.231 "dhchap_dhgroups": [ 00:21:12.231 "null", 00:21:12.231 "ffdhe2048", 00:21:12.231 "ffdhe3072", 00:21:12.231 "ffdhe4096", 00:21:12.231 "ffdhe6144", 00:21:12.231 "ffdhe8192" 00:21:12.231 ] 00:21:12.231 } 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "method": "nvmf_set_max_subsystems", 00:21:12.231 "params": { 00:21:12.231 "max_subsystems": 1024 00:21:12.231 } 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "method": "nvmf_set_crdt", 00:21:12.231 "params": { 00:21:12.231 "crdt1": 0, 00:21:12.231 "crdt2": 0, 00:21:12.231 "crdt3": 0 00:21:12.231 } 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "method": "nvmf_create_transport", 00:21:12.231 "params": { 00:21:12.231 "trtype": "TCP", 00:21:12.231 "max_queue_depth": 128, 00:21:12.231 "max_io_qpairs_per_ctrlr": 127, 00:21:12.231 "in_capsule_data_size": 4096, 00:21:12.231 "max_io_size": 131072, 00:21:12.231 "io_unit_size": 131072, 00:21:12.231 "max_aq_depth": 128, 00:21:12.231 "num_shared_buffers": 511, 00:21:12.231 "buf_cache_size": 4294967295, 00:21:12.231 "dif_insert_or_strip": false, 00:21:12.231 "zcopy": false, 00:21:12.231 "c2h_success": false, 00:21:12.231 "sock_priority": 0, 00:21:12.231 "abort_timeout_sec": 1, 00:21:12.231 "ack_timeout": 0, 00:21:12.231 "data_wr_pool_size": 0 00:21:12.231 } 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "method": "nvmf_create_subsystem", 00:21:12.231 "params": { 00:21:12.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.231 
"allow_any_host": false, 00:21:12.231 "serial_number": "SPDK00000000000001", 00:21:12.231 "model_number": "SPDK bdev Controller", 00:21:12.231 "max_namespaces": 10, 00:21:12.231 "min_cntlid": 1, 00:21:12.231 "max_cntlid": 65519, 00:21:12.231 "ana_reporting": false 00:21:12.231 } 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "method": "nvmf_subsystem_add_host", 00:21:12.231 "params": { 00:21:12.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.231 "host": "nqn.2016-06.io.spdk:host1", 00:21:12.231 "psk": "key0" 00:21:12.231 } 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "method": "nvmf_subsystem_add_ns", 00:21:12.231 "params": { 00:21:12.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.231 "namespace": { 00:21:12.231 "nsid": 1, 00:21:12.231 "bdev_name": "malloc0", 00:21:12.231 "nguid": "2F922CB084AD48ECA2F829FA9BAFF454", 00:21:12.231 "uuid": "2f922cb0-84ad-48ec-a2f8-29fa9baff454", 00:21:12.231 "no_auto_visible": false 00:21:12.231 } 00:21:12.231 } 00:21:12.231 }, 00:21:12.231 { 00:21:12.231 "method": "nvmf_subsystem_add_listener", 00:21:12.231 "params": { 00:21:12.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.231 "listen_address": { 00:21:12.231 "trtype": "TCP", 00:21:12.231 "adrfam": "IPv4", 00:21:12.231 "traddr": "10.0.0.2", 00:21:12.231 "trsvcid": "4420" 00:21:12.231 }, 00:21:12.231 "secure_channel": true 00:21:12.231 } 00:21:12.231 } 00:21:12.231 ] 00:21:12.231 } 00:21:12.231 ] 00:21:12.231 }' 00:21:12.231 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:12.489 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:12.489 "subsystems": [ 00:21:12.489 { 00:21:12.489 "subsystem": "keyring", 00:21:12.489 "config": [ 00:21:12.489 { 00:21:12.489 "method": "keyring_file_add_key", 00:21:12.489 "params": { 00:21:12.489 "name": "key0", 00:21:12.489 "path": "/tmp/tmp.Z0E6smaUsB" 00:21:12.489 } 
00:21:12.489 } 00:21:12.489 ] 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "subsystem": "iobuf", 00:21:12.489 "config": [ 00:21:12.489 { 00:21:12.489 "method": "iobuf_set_options", 00:21:12.489 "params": { 00:21:12.489 "small_pool_count": 8192, 00:21:12.489 "large_pool_count": 1024, 00:21:12.489 "small_bufsize": 8192, 00:21:12.489 "large_bufsize": 135168, 00:21:12.489 "enable_numa": false 00:21:12.489 } 00:21:12.489 } 00:21:12.489 ] 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "subsystem": "sock", 00:21:12.489 "config": [ 00:21:12.489 { 00:21:12.489 "method": "sock_set_default_impl", 00:21:12.489 "params": { 00:21:12.489 "impl_name": "posix" 00:21:12.489 } 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "method": "sock_impl_set_options", 00:21:12.489 "params": { 00:21:12.489 "impl_name": "ssl", 00:21:12.489 "recv_buf_size": 4096, 00:21:12.489 "send_buf_size": 4096, 00:21:12.489 "enable_recv_pipe": true, 00:21:12.489 "enable_quickack": false, 00:21:12.489 "enable_placement_id": 0, 00:21:12.489 "enable_zerocopy_send_server": true, 00:21:12.489 "enable_zerocopy_send_client": false, 00:21:12.489 "zerocopy_threshold": 0, 00:21:12.489 "tls_version": 0, 00:21:12.489 "enable_ktls": false 00:21:12.489 } 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "method": "sock_impl_set_options", 00:21:12.489 "params": { 00:21:12.489 "impl_name": "posix", 00:21:12.489 "recv_buf_size": 2097152, 00:21:12.489 "send_buf_size": 2097152, 00:21:12.489 "enable_recv_pipe": true, 00:21:12.489 "enable_quickack": false, 00:21:12.489 "enable_placement_id": 0, 00:21:12.489 "enable_zerocopy_send_server": true, 00:21:12.489 "enable_zerocopy_send_client": false, 00:21:12.489 "zerocopy_threshold": 0, 00:21:12.489 "tls_version": 0, 00:21:12.489 "enable_ktls": false 00:21:12.489 } 00:21:12.489 } 00:21:12.489 ] 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "subsystem": "vmd", 00:21:12.489 "config": [] 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "subsystem": "accel", 00:21:12.489 "config": [ 00:21:12.489 { 00:21:12.489 
"method": "accel_set_options", 00:21:12.489 "params": { 00:21:12.489 "small_cache_size": 128, 00:21:12.489 "large_cache_size": 16, 00:21:12.489 "task_count": 2048, 00:21:12.489 "sequence_count": 2048, 00:21:12.489 "buf_count": 2048 00:21:12.489 } 00:21:12.489 } 00:21:12.489 ] 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "subsystem": "bdev", 00:21:12.489 "config": [ 00:21:12.489 { 00:21:12.489 "method": "bdev_set_options", 00:21:12.489 "params": { 00:21:12.489 "bdev_io_pool_size": 65535, 00:21:12.489 "bdev_io_cache_size": 256, 00:21:12.489 "bdev_auto_examine": true, 00:21:12.489 "iobuf_small_cache_size": 128, 00:21:12.489 "iobuf_large_cache_size": 16 00:21:12.489 } 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "method": "bdev_raid_set_options", 00:21:12.489 "params": { 00:21:12.489 "process_window_size_kb": 1024, 00:21:12.489 "process_max_bandwidth_mb_sec": 0 00:21:12.489 } 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "method": "bdev_iscsi_set_options", 00:21:12.489 "params": { 00:21:12.489 "timeout_sec": 30 00:21:12.489 } 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "method": "bdev_nvme_set_options", 00:21:12.489 "params": { 00:21:12.489 "action_on_timeout": "none", 00:21:12.489 "timeout_us": 0, 00:21:12.489 "timeout_admin_us": 0, 00:21:12.489 "keep_alive_timeout_ms": 10000, 00:21:12.489 "arbitration_burst": 0, 00:21:12.489 "low_priority_weight": 0, 00:21:12.489 "medium_priority_weight": 0, 00:21:12.489 "high_priority_weight": 0, 00:21:12.489 "nvme_adminq_poll_period_us": 10000, 00:21:12.489 "nvme_ioq_poll_period_us": 0, 00:21:12.489 "io_queue_requests": 512, 00:21:12.489 "delay_cmd_submit": true, 00:21:12.489 "transport_retry_count": 4, 00:21:12.489 "bdev_retry_count": 3, 00:21:12.489 "transport_ack_timeout": 0, 00:21:12.489 "ctrlr_loss_timeout_sec": 0, 00:21:12.489 "reconnect_delay_sec": 0, 00:21:12.489 "fast_io_fail_timeout_sec": 0, 00:21:12.489 "disable_auto_failback": false, 00:21:12.489 "generate_uuids": false, 00:21:12.489 "transport_tos": 0, 00:21:12.489 
"nvme_error_stat": false, 00:21:12.489 "rdma_srq_size": 0, 00:21:12.489 "io_path_stat": false, 00:21:12.489 "allow_accel_sequence": false, 00:21:12.489 "rdma_max_cq_size": 0, 00:21:12.489 "rdma_cm_event_timeout_ms": 0, 00:21:12.489 "dhchap_digests": [ 00:21:12.489 "sha256", 00:21:12.489 "sha384", 00:21:12.489 "sha512" 00:21:12.489 ], 00:21:12.489 "dhchap_dhgroups": [ 00:21:12.489 "null", 00:21:12.489 "ffdhe2048", 00:21:12.489 "ffdhe3072", 00:21:12.489 "ffdhe4096", 00:21:12.489 "ffdhe6144", 00:21:12.489 "ffdhe8192" 00:21:12.489 ] 00:21:12.489 } 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "method": "bdev_nvme_attach_controller", 00:21:12.489 "params": { 00:21:12.489 "name": "TLSTEST", 00:21:12.489 "trtype": "TCP", 00:21:12.489 "adrfam": "IPv4", 00:21:12.489 "traddr": "10.0.0.2", 00:21:12.489 "trsvcid": "4420", 00:21:12.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.489 "prchk_reftag": false, 00:21:12.489 "prchk_guard": false, 00:21:12.489 "ctrlr_loss_timeout_sec": 0, 00:21:12.489 "reconnect_delay_sec": 0, 00:21:12.489 "fast_io_fail_timeout_sec": 0, 00:21:12.489 "psk": "key0", 00:21:12.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.489 "hdgst": false, 00:21:12.489 "ddgst": false, 00:21:12.489 "multipath": "multipath" 00:21:12.489 } 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "method": "bdev_nvme_set_hotplug", 00:21:12.489 "params": { 00:21:12.489 "period_us": 100000, 00:21:12.489 "enable": false 00:21:12.489 } 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "method": "bdev_wait_for_examine" 00:21:12.489 } 00:21:12.489 ] 00:21:12.489 }, 00:21:12.489 { 00:21:12.489 "subsystem": "nbd", 00:21:12.489 "config": [] 00:21:12.489 } 00:21:12.489 ] 00:21:12.489 }' 00:21:12.489 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 386780 00:21:12.489 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 386780 ']' 00:21:12.489 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 386780 00:21:12.489 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.490 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.490 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386780 00:21:12.490 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:12.490 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:12.490 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386780' 00:21:12.490 killing process with pid 386780 00:21:12.490 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 386780 00:21:12.490 Received shutdown signal, test time was about 10.000000 seconds 00:21:12.490 00:21:12.490 Latency(us) 00:21:12.490 [2024-12-05T19:41:05.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.490 [2024-12-05T19:41:05.931Z] =================================================================================================================== 00:21:12.490 [2024-12-05T19:41:05.931Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:12.490 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 386780 00:21:12.748 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 386483 00:21:12.748 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 386483 ']' 00:21:12.748 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 386483 00:21:12.748 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.748 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.748 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386483 00:21:12.748 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:12.748 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:12.748 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386483' 00:21:12.748 killing process with pid 386483 00:21:12.748 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 386483 00:21:12.748 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 386483 00:21:13.007 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:13.007 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.007 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.008 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.008 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:13.008 "subsystems": [ 00:21:13.008 { 00:21:13.008 "subsystem": "keyring", 00:21:13.008 "config": [ 00:21:13.008 { 00:21:13.008 "method": "keyring_file_add_key", 00:21:13.008 "params": { 00:21:13.008 "name": "key0", 00:21:13.008 "path": "/tmp/tmp.Z0E6smaUsB" 00:21:13.008 } 00:21:13.008 } 00:21:13.008 ] 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "subsystem": "iobuf", 00:21:13.008 "config": [ 00:21:13.008 { 00:21:13.008 "method": "iobuf_set_options", 00:21:13.008 "params": { 00:21:13.008 "small_pool_count": 8192, 00:21:13.008 "large_pool_count": 1024, 00:21:13.008 "small_bufsize": 8192, 00:21:13.008 "large_bufsize": 135168, 
00:21:13.008 "enable_numa": false 00:21:13.008 } 00:21:13.008 } 00:21:13.008 ] 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "subsystem": "sock", 00:21:13.008 "config": [ 00:21:13.008 { 00:21:13.008 "method": "sock_set_default_impl", 00:21:13.008 "params": { 00:21:13.008 "impl_name": "posix" 00:21:13.008 } 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "method": "sock_impl_set_options", 00:21:13.008 "params": { 00:21:13.008 "impl_name": "ssl", 00:21:13.008 "recv_buf_size": 4096, 00:21:13.008 "send_buf_size": 4096, 00:21:13.008 "enable_recv_pipe": true, 00:21:13.008 "enable_quickack": false, 00:21:13.008 "enable_placement_id": 0, 00:21:13.008 "enable_zerocopy_send_server": true, 00:21:13.008 "enable_zerocopy_send_client": false, 00:21:13.008 "zerocopy_threshold": 0, 00:21:13.008 "tls_version": 0, 00:21:13.008 "enable_ktls": false 00:21:13.008 } 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "method": "sock_impl_set_options", 00:21:13.008 "params": { 00:21:13.008 "impl_name": "posix", 00:21:13.008 "recv_buf_size": 2097152, 00:21:13.008 "send_buf_size": 2097152, 00:21:13.008 "enable_recv_pipe": true, 00:21:13.008 "enable_quickack": false, 00:21:13.008 "enable_placement_id": 0, 00:21:13.008 "enable_zerocopy_send_server": true, 00:21:13.008 "enable_zerocopy_send_client": false, 00:21:13.008 "zerocopy_threshold": 0, 00:21:13.008 "tls_version": 0, 00:21:13.008 "enable_ktls": false 00:21:13.008 } 00:21:13.008 } 00:21:13.008 ] 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "subsystem": "vmd", 00:21:13.008 "config": [] 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "subsystem": "accel", 00:21:13.008 "config": [ 00:21:13.008 { 00:21:13.008 "method": "accel_set_options", 00:21:13.008 "params": { 00:21:13.008 "small_cache_size": 128, 00:21:13.008 "large_cache_size": 16, 00:21:13.008 "task_count": 2048, 00:21:13.008 "sequence_count": 2048, 00:21:13.008 "buf_count": 2048 00:21:13.008 } 00:21:13.008 } 00:21:13.008 ] 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "subsystem": "bdev", 00:21:13.008 
"config": [ 00:21:13.008 { 00:21:13.008 "method": "bdev_set_options", 00:21:13.008 "params": { 00:21:13.008 "bdev_io_pool_size": 65535, 00:21:13.008 "bdev_io_cache_size": 256, 00:21:13.008 "bdev_auto_examine": true, 00:21:13.008 "iobuf_small_cache_size": 128, 00:21:13.008 "iobuf_large_cache_size": 16 00:21:13.008 } 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "method": "bdev_raid_set_options", 00:21:13.008 "params": { 00:21:13.008 "process_window_size_kb": 1024, 00:21:13.008 "process_max_bandwidth_mb_sec": 0 00:21:13.008 } 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "method": "bdev_iscsi_set_options", 00:21:13.008 "params": { 00:21:13.008 "timeout_sec": 30 00:21:13.008 } 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "method": "bdev_nvme_set_options", 00:21:13.008 "params": { 00:21:13.008 "action_on_timeout": "none", 00:21:13.008 "timeout_us": 0, 00:21:13.008 "timeout_admin_us": 0, 00:21:13.008 "keep_alive_timeout_ms": 10000, 00:21:13.008 "arbitration_burst": 0, 00:21:13.008 "low_priority_weight": 0, 00:21:13.008 "medium_priority_weight": 0, 00:21:13.008 "high_priority_weight": 0, 00:21:13.008 "nvme_adminq_poll_period_us": 10000, 00:21:13.008 "nvme_ioq_poll_period_us": 0, 00:21:13.008 "io_queue_requests": 0, 00:21:13.008 "delay_cmd_submit": true, 00:21:13.008 "transport_retry_count": 4, 00:21:13.008 "bdev_retry_count": 3, 00:21:13.008 "transport_ack_timeout": 0, 00:21:13.008 "ctrlr_loss_timeout_sec": 0, 00:21:13.008 "reconnect_delay_sec": 0, 00:21:13.008 "fast_io_fail_timeout_sec": 0, 00:21:13.008 "disable_auto_failback": false, 00:21:13.008 "generate_uuids": false, 00:21:13.008 "transport_tos": 0, 00:21:13.008 "nvme_error_stat": false, 00:21:13.008 "rdma_srq_size": 0, 00:21:13.008 "io_path_stat": false, 00:21:13.008 "allow_accel_sequence": false, 00:21:13.008 "rdma_max_cq_size": 0, 00:21:13.008 "rdma_cm_event_timeout_ms": 0, 00:21:13.008 "dhchap_digests": [ 00:21:13.008 "sha256", 00:21:13.008 "sha384", 00:21:13.008 "sha512" 00:21:13.008 ], 00:21:13.008 
"dhchap_dhgroups": [ 00:21:13.008 "null", 00:21:13.008 "ffdhe2048", 00:21:13.008 "ffdhe3072", 00:21:13.008 "ffdhe4096", 00:21:13.008 "ffdhe6144", 00:21:13.008 "ffdhe8192" 00:21:13.008 ] 00:21:13.008 } 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "method": "bdev_nvme_set_hotplug", 00:21:13.008 "params": { 00:21:13.008 "period_us": 100000, 00:21:13.008 "enable": false 00:21:13.008 } 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "method": "bdev_malloc_create", 00:21:13.008 "params": { 00:21:13.008 "name": "malloc0", 00:21:13.008 "num_blocks": 8192, 00:21:13.008 "block_size": 4096, 00:21:13.008 "physical_block_size": 4096, 00:21:13.008 "uuid": "2f922cb0-84ad-48ec-a2f8-29fa9baff454", 00:21:13.008 "optimal_io_boundary": 0, 00:21:13.008 "md_size": 0, 00:21:13.008 "dif_type": 0, 00:21:13.008 "dif_is_head_of_md": false, 00:21:13.008 "dif_pi_format": 0 00:21:13.008 } 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "method": "bdev_wait_for_examine" 00:21:13.008 } 00:21:13.008 ] 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "subsystem": "nbd", 00:21:13.008 "config": [] 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "subsystem": "scheduler", 00:21:13.008 "config": [ 00:21:13.008 { 00:21:13.008 "method": "framework_set_scheduler", 00:21:13.008 "params": { 00:21:13.008 "name": "static" 00:21:13.008 } 00:21:13.008 } 00:21:13.008 ] 00:21:13.008 }, 00:21:13.008 { 00:21:13.008 "subsystem": "nvmf", 00:21:13.008 "config": [ 00:21:13.008 { 00:21:13.008 "method": "nvmf_set_config", 00:21:13.008 "params": { 00:21:13.008 "discovery_filter": "match_any", 00:21:13.008 "admin_cmd_passthru": { 00:21:13.008 "identify_ctrlr": false 00:21:13.008 }, 00:21:13.008 "dhchap_digests": [ 00:21:13.008 "sha256", 00:21:13.008 "sha384", 00:21:13.008 "sha512" 00:21:13.008 ], 00:21:13.008 "dhchap_dhgroups": [ 00:21:13.008 "null", 00:21:13.008 "ffdhe2048", 00:21:13.008 "ffdhe3072", 00:21:13.009 "ffdhe4096", 00:21:13.009 "ffdhe6144", 00:21:13.009 "ffdhe8192" 00:21:13.009 ] 00:21:13.009 } 00:21:13.009 }, 00:21:13.009 { 
00:21:13.009 "method": "nvmf_set_max_subsystems", 00:21:13.009 "params": { 00:21:13.009 "max_subsystems": 1024 00:21:13.009 } 00:21:13.009 }, 00:21:13.009 { 00:21:13.009 "method": "nvmf_set_crdt", 00:21:13.009 "params": { 00:21:13.009 "crdt1": 0, 00:21:13.009 "crdt2": 0, 00:21:13.009 "crdt3": 0 00:21:13.009 } 00:21:13.009 }, 00:21:13.009 { 00:21:13.009 "method": "nvmf_create_transport", 00:21:13.009 "params": { 00:21:13.009 "trtype": "TCP", 00:21:13.009 "max_queue_depth": 128, 00:21:13.009 "max_io_qpairs_per_ctrlr": 127, 00:21:13.009 "in_capsule_data_size": 4096, 00:21:13.009 "max_io_size": 131072, 00:21:13.009 "io_unit_size": 131072, 00:21:13.009 "max_aq_depth": 128, 00:21:13.009 "num_shared_buffers": 511, 00:21:13.009 "buf_cache_size": 4294967295, 00:21:13.009 "dif_insert_or_strip": false, 00:21:13.009 "zcopy": false, 00:21:13.009 "c2h_success": false, 00:21:13.009 "sock_priority": 0, 00:21:13.009 "abort_timeout_sec": 1, 00:21:13.009 "ack_timeout": 0, 00:21:13.009 "data_wr_pool_size": 0 00:21:13.009 } 00:21:13.009 }, 00:21:13.009 { 00:21:13.009 "method": "nvmf_create_subsystem", 00:21:13.009 "params": { 00:21:13.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.009 "allow_any_host": false, 00:21:13.009 "serial_number": "SPDK00000000000001", 00:21:13.009 "model_number": "SPDK bdev Controller", 00:21:13.009 "max_namespaces": 10, 00:21:13.009 "min_cntlid": 1, 00:21:13.009 "max_cntlid": 65519, 00:21:13.009 "ana_reporting": false 00:21:13.009 } 00:21:13.009 }, 00:21:13.009 { 00:21:13.009 "method": "nvmf_subsystem_add_host", 00:21:13.009 "params": { 00:21:13.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.009 "host": "nqn.2016-06.io.spdk:host1", 00:21:13.009 "psk": "key0" 00:21:13.009 } 00:21:13.009 }, 00:21:13.009 { 00:21:13.009 "method": "nvmf_subsystem_add_ns", 00:21:13.009 "params": { 00:21:13.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.009 "namespace": { 00:21:13.009 "nsid": 1, 00:21:13.009 "bdev_name": "malloc0", 00:21:13.009 "nguid": 
"2F922CB084AD48ECA2F829FA9BAFF454", 00:21:13.009 "uuid": "2f922cb0-84ad-48ec-a2f8-29fa9baff454", 00:21:13.009 "no_auto_visible": false 00:21:13.009 } 00:21:13.009 } 00:21:13.009 }, 00:21:13.009 { 00:21:13.009 "method": "nvmf_subsystem_add_listener", 00:21:13.009 "params": { 00:21:13.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.009 "listen_address": { 00:21:13.009 "trtype": "TCP", 00:21:13.009 "adrfam": "IPv4", 00:21:13.009 "traddr": "10.0.0.2", 00:21:13.009 "trsvcid": "4420" 00:21:13.009 }, 00:21:13.009 "secure_channel": true 00:21:13.009 } 00:21:13.009 } 00:21:13.009 ] 00:21:13.009 } 00:21:13.009 ] 00:21:13.009 }' 00:21:13.009 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=387065 00:21:13.009 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:13.009 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 387065 00:21:13.009 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 387065 ']' 00:21:13.009 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.009 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.009 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:13.009 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.009 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.009 [2024-12-05 20:41:06.313866] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:13.009 [2024-12-05 20:41:06.313906] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.009 [2024-12-05 20:41:06.384472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.009 [2024-12-05 20:41:06.421932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.009 [2024-12-05 20:41:06.421964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.009 [2024-12-05 20:41:06.421970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.009 [2024-12-05 20:41:06.421976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.009 [2024-12-05 20:41:06.421981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:13.009 [2024-12-05 20:41:06.422535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.269 [2024-12-05 20:41:06.635230] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.269 [2024-12-05 20:41:06.667263] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.269 [2024-12-05 20:41:06.667477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=387339 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 387339 /var/tmp/bdevperf.sock 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 387339 ']' 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.838 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:13.838 "subsystems": [ 00:21:13.838 { 00:21:13.838 "subsystem": "keyring", 00:21:13.838 "config": [ 00:21:13.838 { 00:21:13.838 "method": "keyring_file_add_key", 00:21:13.838 "params": { 00:21:13.838 "name": "key0", 00:21:13.838 "path": "/tmp/tmp.Z0E6smaUsB" 00:21:13.838 } 00:21:13.838 } 00:21:13.838 ] 00:21:13.838 }, 00:21:13.838 { 00:21:13.838 "subsystem": "iobuf", 00:21:13.838 "config": [ 00:21:13.838 { 00:21:13.838 "method": "iobuf_set_options", 00:21:13.838 "params": { 00:21:13.838 "small_pool_count": 8192, 00:21:13.838 "large_pool_count": 1024, 00:21:13.838 "small_bufsize": 8192, 00:21:13.838 "large_bufsize": 135168, 00:21:13.838 "enable_numa": false 00:21:13.838 } 00:21:13.838 } 00:21:13.838 ] 00:21:13.838 }, 00:21:13.838 { 00:21:13.838 "subsystem": "sock", 00:21:13.838 "config": [ 00:21:13.838 { 00:21:13.838 "method": "sock_set_default_impl", 00:21:13.838 "params": { 00:21:13.838 "impl_name": "posix" 00:21:13.838 } 00:21:13.838 }, 00:21:13.838 { 00:21:13.838 "method": "sock_impl_set_options", 00:21:13.838 "params": { 00:21:13.838 "impl_name": "ssl", 00:21:13.838 "recv_buf_size": 4096, 00:21:13.838 "send_buf_size": 4096, 00:21:13.838 "enable_recv_pipe": true, 00:21:13.838 "enable_quickack": false, 00:21:13.838 "enable_placement_id": 0, 00:21:13.838 "enable_zerocopy_send_server": true, 00:21:13.838 "enable_zerocopy_send_client": false, 00:21:13.838 "zerocopy_threshold": 0, 00:21:13.838 "tls_version": 0, 00:21:13.838 "enable_ktls": false 00:21:13.838 } 00:21:13.838 }, 00:21:13.838 { 00:21:13.838 "method": "sock_impl_set_options", 00:21:13.838 "params": { 
00:21:13.838 "impl_name": "posix", 00:21:13.838 "recv_buf_size": 2097152, 00:21:13.838 "send_buf_size": 2097152, 00:21:13.838 "enable_recv_pipe": true, 00:21:13.838 "enable_quickack": false, 00:21:13.838 "enable_placement_id": 0, 00:21:13.838 "enable_zerocopy_send_server": true, 00:21:13.838 "enable_zerocopy_send_client": false, 00:21:13.838 "zerocopy_threshold": 0, 00:21:13.838 "tls_version": 0, 00:21:13.838 "enable_ktls": false 00:21:13.838 } 00:21:13.838 } 00:21:13.838 ] 00:21:13.838 }, 00:21:13.838 { 00:21:13.838 "subsystem": "vmd", 00:21:13.838 "config": [] 00:21:13.838 }, 00:21:13.838 { 00:21:13.838 "subsystem": "accel", 00:21:13.838 "config": [ 00:21:13.839 { 00:21:13.839 "method": "accel_set_options", 00:21:13.839 "params": { 00:21:13.839 "small_cache_size": 128, 00:21:13.839 "large_cache_size": 16, 00:21:13.839 "task_count": 2048, 00:21:13.839 "sequence_count": 2048, 00:21:13.839 "buf_count": 2048 00:21:13.839 } 00:21:13.839 } 00:21:13.839 ] 00:21:13.839 }, 00:21:13.839 { 00:21:13.839 "subsystem": "bdev", 00:21:13.839 "config": [ 00:21:13.839 { 00:21:13.839 "method": "bdev_set_options", 00:21:13.839 "params": { 00:21:13.839 "bdev_io_pool_size": 65535, 00:21:13.839 "bdev_io_cache_size": 256, 00:21:13.839 "bdev_auto_examine": true, 00:21:13.839 "iobuf_small_cache_size": 128, 00:21:13.839 "iobuf_large_cache_size": 16 00:21:13.839 } 00:21:13.839 }, 00:21:13.839 { 00:21:13.839 "method": "bdev_raid_set_options", 00:21:13.839 "params": { 00:21:13.839 "process_window_size_kb": 1024, 00:21:13.839 "process_max_bandwidth_mb_sec": 0 00:21:13.839 } 00:21:13.839 }, 00:21:13.839 { 00:21:13.839 "method": "bdev_iscsi_set_options", 00:21:13.839 "params": { 00:21:13.839 "timeout_sec": 30 00:21:13.839 } 00:21:13.839 }, 00:21:13.839 { 00:21:13.839 "method": "bdev_nvme_set_options", 00:21:13.839 "params": { 00:21:13.839 "action_on_timeout": "none", 00:21:13.839 "timeout_us": 0, 00:21:13.839 "timeout_admin_us": 0, 00:21:13.839 "keep_alive_timeout_ms": 10000, 00:21:13.839 
"arbitration_burst": 0, 00:21:13.839 "low_priority_weight": 0, 00:21:13.839 "medium_priority_weight": 0, 00:21:13.839 "high_priority_weight": 0, 00:21:13.839 "nvme_adminq_poll_period_us": 10000, 00:21:13.839 "nvme_ioq_poll_period_us": 0, 00:21:13.839 "io_queue_requests": 512, 00:21:13.839 "delay_cmd_submit": true, 00:21:13.839 "transport_retry_count": 4, 00:21:13.839 "bdev_retry_count": 3, 00:21:13.839 "transport_ack_timeout": 0, 00:21:13.839 "ctrlr_loss_timeout_sec": 0, 00:21:13.839 "reconnect_delay_sec": 0, 00:21:13.839 "fast_io_fail_timeout_sec": 0, 00:21:13.839 "disable_auto_failback": false, 00:21:13.839 "generate_uuids": false, 00:21:13.839 "transport_tos": 0, 00:21:13.839 "nvme_error_stat": false, 00:21:13.839 "rdma_srq_size": 0, 00:21:13.839 "io_path_stat": false, 00:21:13.839 "allow_accel_sequence": false, 00:21:13.839 "rdma_max_cq_size": 0, 00:21:13.839 "rdma_cm_event_timeout_ms": 0, 00:21:13.839 "dhchap_digests": [ 00:21:13.839 "sha256", 00:21:13.839 "sha384", 00:21:13.839 "sha512" 00:21:13.839 ], 00:21:13.839 "dhchap_dhgroups": [ 00:21:13.839 "null", 00:21:13.839 "ffdhe2048", 00:21:13.839 "ffdhe3072", 00:21:13.839 "ffdhe4096", 00:21:13.839 "ffdhe6144", 00:21:13.839 "ffdhe8192" 00:21:13.839 ] 00:21:13.839 } 00:21:13.839 }, 00:21:13.839 { 00:21:13.839 "method": "bdev_nvme_attach_controller", 00:21:13.839 "params": { 00:21:13.839 "name": "TLSTEST", 00:21:13.839 "trtype": "TCP", 00:21:13.839 "adrfam": "IPv4", 00:21:13.839 "traddr": "10.0.0.2", 00:21:13.839 "trsvcid": "4420", 00:21:13.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.839 "prchk_reftag": false, 00:21:13.839 "prchk_guard": false, 00:21:13.839 "ctrlr_loss_timeout_sec": 0, 00:21:13.839 "reconnect_delay_sec": 0, 00:21:13.839 "fast_io_fail_timeout_sec": 0, 00:21:13.839 "psk": "key0", 00:21:13.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.839 "hdgst": false, 00:21:13.839 "ddgst": false, 00:21:13.839 "multipath": "multipath" 00:21:13.839 } 00:21:13.839 }, 00:21:13.839 { 00:21:13.839 
"method": "bdev_nvme_set_hotplug", 00:21:13.839 "params": { 00:21:13.839 "period_us": 100000, 00:21:13.839 "enable": false 00:21:13.839 } 00:21:13.839 }, 00:21:13.839 { 00:21:13.839 "method": "bdev_wait_for_examine" 00:21:13.839 } 00:21:13.839 ] 00:21:13.839 }, 00:21:13.839 { 00:21:13.839 "subsystem": "nbd", 00:21:13.839 "config": [] 00:21:13.839 } 00:21:13.839 ] 00:21:13.839 }' 00:21:13.839 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.839 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.839 [2024-12-05 20:41:07.208272] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:13.839 [2024-12-05 20:41:07.208316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387339 ] 00:21:14.100 [2024-12-05 20:41:07.280227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.100 [2024-12-05 20:41:07.317766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.100 [2024-12-05 20:41:07.469698] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.668 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.668 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:14.668 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:14.928 Running I/O for 10 seconds... 
00:21:16.806 5505.00 IOPS, 21.50 MiB/s [2024-12-05T19:41:11.186Z] 5557.50 IOPS, 21.71 MiB/s [2024-12-05T19:41:12.122Z] 5640.33 IOPS, 22.03 MiB/s [2024-12-05T19:41:13.499Z] 5661.50 IOPS, 22.12 MiB/s [2024-12-05T19:41:14.436Z] 5651.40 IOPS, 22.08 MiB/s [2024-12-05T19:41:15.371Z] 5697.50 IOPS, 22.26 MiB/s [2024-12-05T19:41:16.305Z] 5660.57 IOPS, 22.11 MiB/s [2024-12-05T19:41:17.244Z] 5688.62 IOPS, 22.22 MiB/s [2024-12-05T19:41:18.183Z] 5593.33 IOPS, 21.85 MiB/s [2024-12-05T19:41:18.183Z] 5509.10 IOPS, 21.52 MiB/s 00:21:24.742 Latency(us) 00:21:24.742 [2024-12-05T19:41:18.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.742 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:24.742 Verification LBA range: start 0x0 length 0x2000 00:21:24.742 TLSTESTn1 : 10.02 5512.57 21.53 0.00 0.00 23185.73 4557.73 33602.09 00:21:24.742 [2024-12-05T19:41:18.183Z] =================================================================================================================== 00:21:24.742 [2024-12-05T19:41:18.183Z] Total : 5512.57 21.53 0.00 0.00 23185.73 4557.73 33602.09 00:21:24.742 { 00:21:24.742 "results": [ 00:21:24.742 { 00:21:24.742 "job": "TLSTESTn1", 00:21:24.742 "core_mask": "0x4", 00:21:24.742 "workload": "verify", 00:21:24.742 "status": "finished", 00:21:24.742 "verify_range": { 00:21:24.742 "start": 0, 00:21:24.742 "length": 8192 00:21:24.742 }, 00:21:24.742 "queue_depth": 128, 00:21:24.742 "io_size": 4096, 00:21:24.742 "runtime": 10.016571, 00:21:24.742 "iops": 5512.565128325851, 00:21:24.742 "mibps": 21.533457532522856, 00:21:24.742 "io_failed": 0, 00:21:24.742 "io_timeout": 0, 00:21:24.742 "avg_latency_us": 23185.731738611463, 00:21:24.742 "min_latency_us": 4557.730909090909, 00:21:24.742 "max_latency_us": 33602.09454545454 00:21:24.742 } 00:21:24.742 ], 00:21:24.742 "core_count": 1 00:21:24.742 } 00:21:24.742 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:21:24.743 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 387339 00:21:24.743 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 387339 ']' 00:21:24.743 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 387339 00:21:24.743 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387339 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387339' 00:21:25.002 killing process with pid 387339 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 387339 00:21:25.002 Received shutdown signal, test time was about 10.000000 seconds 00:21:25.002 00:21:25.002 Latency(us) 00:21:25.002 [2024-12-05T19:41:18.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.002 [2024-12-05T19:41:18.443Z] =================================================================================================================== 00:21:25.002 [2024-12-05T19:41:18.443Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 387339 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 387065 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 387065 ']' 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 387065 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.002 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387065 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387065' 00:21:25.263 killing process with pid 387065 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 387065 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 387065 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=389372 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 389372 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:25.263 20:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 389372 ']' 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.263 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.263 [2024-12-05 20:41:18.676407] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:25.263 [2024-12-05 20:41:18.676448] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.523 [2024-12-05 20:41:18.753409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.524 [2024-12-05 20:41:18.790224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.524 [2024-12-05 20:41:18.790254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.524 [2024-12-05 20:41:18.790261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.524 [2024-12-05 20:41:18.790267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:25.524 [2024-12-05 20:41:18.790271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.524 [2024-12-05 20:41:18.790822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.093 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.093 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:26.093 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:26.093 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.093 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.093 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.093 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Z0E6smaUsB 00:21:26.093 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Z0E6smaUsB 00:21:26.093 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:26.353 [2024-12-05 20:41:19.672225] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.353 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:26.613 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:26.873 [2024-12-05 20:41:20.057213] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:26.873 [2024-12-05 20:41:20.057409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.873 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:26.873 malloc0 00:21:26.873 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:27.135 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Z0E6smaUsB 00:21:27.396 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:27.396 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:27.396 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=389737 00:21:27.396 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:27.396 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 389737 /var/tmp/bdevperf.sock 00:21:27.396 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 389737 ']' 00:21:27.396 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.396 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.396 20:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.396 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.396 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.655 [2024-12-05 20:41:20.863369] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:27.656 [2024-12-05 20:41:20.863415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389737 ] 00:21:27.656 [2024-12-05 20:41:20.936513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.656 [2024-12-05 20:41:20.976574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.592 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.592 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:28.592 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z0E6smaUsB 00:21:28.592 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:28.592 [2024-12-05 20:41:21.993153] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:21:28.850 nvme0n1 00:21:28.850 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:28.850 Running I/O for 1 seconds... 00:21:29.786 5447.00 IOPS, 21.28 MiB/s 00:21:29.787 Latency(us) 00:21:29.787 [2024-12-05T19:41:23.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.787 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:29.787 Verification LBA range: start 0x0 length 0x2000 00:21:29.787 nvme0n1 : 1.02 5445.81 21.27 0.00 0.00 23291.27 5719.51 63391.19 00:21:29.787 [2024-12-05T19:41:23.228Z] =================================================================================================================== 00:21:29.787 [2024-12-05T19:41:23.228Z] Total : 5445.81 21.27 0.00 0.00 23291.27 5719.51 63391.19 00:21:29.787 { 00:21:29.787 "results": [ 00:21:29.787 { 00:21:29.787 "job": "nvme0n1", 00:21:29.787 "core_mask": "0x2", 00:21:29.787 "workload": "verify", 00:21:29.787 "status": "finished", 00:21:29.787 "verify_range": { 00:21:29.787 "start": 0, 00:21:29.787 "length": 8192 00:21:29.787 }, 00:21:29.787 "queue_depth": 128, 00:21:29.787 "io_size": 4096, 00:21:29.787 "runtime": 1.023907, 00:21:29.787 "iops": 5445.807089901719, 00:21:29.787 "mibps": 21.27268394492859, 00:21:29.787 "io_failed": 0, 00:21:29.787 "io_timeout": 0, 00:21:29.787 "avg_latency_us": 23291.26531889918, 00:21:29.787 "min_latency_us": 5719.505454545455, 00:21:29.787 "max_latency_us": 63391.185454545455 00:21:29.787 } 00:21:29.787 ], 00:21:29.787 "core_count": 1 00:21:29.787 } 00:21:29.787 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 389737 00:21:29.787 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 389737 ']' 00:21:29.787 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 
-- # kill -0 389737 00:21:29.787 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:29.787 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.787 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 389737 00:21:30.046 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.046 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 389737' 00:21:30.047 killing process with pid 389737 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 389737 00:21:30.047 Received shutdown signal, test time was about 1.000000 seconds 00:21:30.047 00:21:30.047 Latency(us) 00:21:30.047 [2024-12-05T19:41:23.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.047 [2024-12-05T19:41:23.488Z] =================================================================================================================== 00:21:30.047 [2024-12-05T19:41:23.488Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 389737 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 389372 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 389372 ']' 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 389372 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 389372 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 389372' 00:21:30.047 killing process with pid 389372 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 389372 00:21:30.047 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 389372 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=390276 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 390276 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 390276 ']' 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.306 20:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.306 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.306 [2024-12-05 20:41:23.705490] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:30.306 [2024-12-05 20:41:23.705533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.566 [2024-12-05 20:41:23.781567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.566 [2024-12-05 20:41:23.813855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.566 [2024-12-05 20:41:23.813890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.566 [2024-12-05 20:41:23.813896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.566 [2024-12-05 20:41:23.813901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.566 [2024-12-05 20:41:23.813905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:30.566 [2024-12-05 20:41:23.814474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.135 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.135 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:31.135 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:31.135 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.135 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.135 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.135 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:31.135 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.135 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.136 [2024-12-05 20:41:24.550904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.136 malloc0 00:21:31.396 [2024-12-05 20:41:24.578894] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:31.396 [2024-12-05 20:41:24.579098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.396 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.396 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=390536 00:21:31.396 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:31.396 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 390536 /var/tmp/bdevperf.sock 00:21:31.396 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 390536 ']' 00:21:31.396 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.396 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.396 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.396 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.396 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.396 [2024-12-05 20:41:24.655444] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:21:31.396 [2024-12-05 20:41:24.655485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390536 ] 00:21:31.396 [2024-12-05 20:41:24.729275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.396 [2024-12-05 20:41:24.768576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.656 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.656 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:31.656 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z0E6smaUsB 00:21:31.656 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:31.915 [2024-12-05 20:41:25.191349] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.915 nvme0n1 00:21:31.915 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.174 Running I/O for 1 seconds... 
00:21:33.113 5075.00 IOPS, 19.82 MiB/s 00:21:33.113 Latency(us) 00:21:33.113 [2024-12-05T19:41:26.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.113 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:33.113 Verification LBA range: start 0x0 length 0x2000 00:21:33.113 nvme0n1 : 1.01 5140.96 20.08 0.00 0.00 24739.72 5093.93 61961.31 00:21:33.113 [2024-12-05T19:41:26.554Z] =================================================================================================================== 00:21:33.113 [2024-12-05T19:41:26.554Z] Total : 5140.96 20.08 0.00 0.00 24739.72 5093.93 61961.31 00:21:33.113 { 00:21:33.113 "results": [ 00:21:33.113 { 00:21:33.113 "job": "nvme0n1", 00:21:33.113 "core_mask": "0x2", 00:21:33.113 "workload": "verify", 00:21:33.113 "status": "finished", 00:21:33.113 "verify_range": { 00:21:33.113 "start": 0, 00:21:33.113 "length": 8192 00:21:33.113 }, 00:21:33.113 "queue_depth": 128, 00:21:33.113 "io_size": 4096, 00:21:33.113 "runtime": 1.012262, 00:21:33.113 "iops": 5140.961529722542, 00:21:33.113 "mibps": 20.08188097547868, 00:21:33.113 "io_failed": 0, 00:21:33.113 "io_timeout": 0, 00:21:33.113 "avg_latency_us": 24739.72265250507, 00:21:33.113 "min_latency_us": 5093.9345454545455, 00:21:33.113 "max_latency_us": 61961.30909090909 00:21:33.113 } 00:21:33.113 ], 00:21:33.113 "core_count": 1 00:21:33.113 } 00:21:33.113 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:33.113 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.113 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.113 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.113 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:33.113 "subsystems": [ 00:21:33.113 { 00:21:33.113 "subsystem": 
"keyring", 00:21:33.113 "config": [ 00:21:33.113 { 00:21:33.113 "method": "keyring_file_add_key", 00:21:33.113 "params": { 00:21:33.113 "name": "key0", 00:21:33.113 "path": "/tmp/tmp.Z0E6smaUsB" 00:21:33.113 } 00:21:33.113 } 00:21:33.113 ] 00:21:33.113 }, 00:21:33.113 { 00:21:33.113 "subsystem": "iobuf", 00:21:33.113 "config": [ 00:21:33.113 { 00:21:33.113 "method": "iobuf_set_options", 00:21:33.113 "params": { 00:21:33.113 "small_pool_count": 8192, 00:21:33.113 "large_pool_count": 1024, 00:21:33.113 "small_bufsize": 8192, 00:21:33.113 "large_bufsize": 135168, 00:21:33.113 "enable_numa": false 00:21:33.113 } 00:21:33.113 } 00:21:33.113 ] 00:21:33.113 }, 00:21:33.113 { 00:21:33.113 "subsystem": "sock", 00:21:33.113 "config": [ 00:21:33.113 { 00:21:33.113 "method": "sock_set_default_impl", 00:21:33.113 "params": { 00:21:33.113 "impl_name": "posix" 00:21:33.113 } 00:21:33.113 }, 00:21:33.113 { 00:21:33.113 "method": "sock_impl_set_options", 00:21:33.113 "params": { 00:21:33.113 "impl_name": "ssl", 00:21:33.113 "recv_buf_size": 4096, 00:21:33.113 "send_buf_size": 4096, 00:21:33.113 "enable_recv_pipe": true, 00:21:33.113 "enable_quickack": false, 00:21:33.113 "enable_placement_id": 0, 00:21:33.113 "enable_zerocopy_send_server": true, 00:21:33.113 "enable_zerocopy_send_client": false, 00:21:33.113 "zerocopy_threshold": 0, 00:21:33.113 "tls_version": 0, 00:21:33.113 "enable_ktls": false 00:21:33.113 } 00:21:33.113 }, 00:21:33.113 { 00:21:33.113 "method": "sock_impl_set_options", 00:21:33.113 "params": { 00:21:33.113 "impl_name": "posix", 00:21:33.113 "recv_buf_size": 2097152, 00:21:33.113 "send_buf_size": 2097152, 00:21:33.113 "enable_recv_pipe": true, 00:21:33.113 "enable_quickack": false, 00:21:33.113 "enable_placement_id": 0, 00:21:33.113 "enable_zerocopy_send_server": true, 00:21:33.113 "enable_zerocopy_send_client": false, 00:21:33.113 "zerocopy_threshold": 0, 00:21:33.113 "tls_version": 0, 00:21:33.113 "enable_ktls": false 00:21:33.113 } 00:21:33.113 } 00:21:33.113 
] 00:21:33.113 }, 00:21:33.113 { 00:21:33.113 "subsystem": "vmd", 00:21:33.113 "config": [] 00:21:33.113 }, 00:21:33.113 { 00:21:33.113 "subsystem": "accel", 00:21:33.113 "config": [ 00:21:33.113 { 00:21:33.113 "method": "accel_set_options", 00:21:33.113 "params": { 00:21:33.113 "small_cache_size": 128, 00:21:33.113 "large_cache_size": 16, 00:21:33.113 "task_count": 2048, 00:21:33.113 "sequence_count": 2048, 00:21:33.113 "buf_count": 2048 00:21:33.113 } 00:21:33.113 } 00:21:33.113 ] 00:21:33.113 }, 00:21:33.113 { 00:21:33.113 "subsystem": "bdev", 00:21:33.113 "config": [ 00:21:33.113 { 00:21:33.113 "method": "bdev_set_options", 00:21:33.113 "params": { 00:21:33.113 "bdev_io_pool_size": 65535, 00:21:33.113 "bdev_io_cache_size": 256, 00:21:33.114 "bdev_auto_examine": true, 00:21:33.114 "iobuf_small_cache_size": 128, 00:21:33.114 "iobuf_large_cache_size": 16 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "bdev_raid_set_options", 00:21:33.114 "params": { 00:21:33.114 "process_window_size_kb": 1024, 00:21:33.114 "process_max_bandwidth_mb_sec": 0 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "bdev_iscsi_set_options", 00:21:33.114 "params": { 00:21:33.114 "timeout_sec": 30 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "bdev_nvme_set_options", 00:21:33.114 "params": { 00:21:33.114 "action_on_timeout": "none", 00:21:33.114 "timeout_us": 0, 00:21:33.114 "timeout_admin_us": 0, 00:21:33.114 "keep_alive_timeout_ms": 10000, 00:21:33.114 "arbitration_burst": 0, 00:21:33.114 "low_priority_weight": 0, 00:21:33.114 "medium_priority_weight": 0, 00:21:33.114 "high_priority_weight": 0, 00:21:33.114 "nvme_adminq_poll_period_us": 10000, 00:21:33.114 "nvme_ioq_poll_period_us": 0, 00:21:33.114 "io_queue_requests": 0, 00:21:33.114 "delay_cmd_submit": true, 00:21:33.114 "transport_retry_count": 4, 00:21:33.114 "bdev_retry_count": 3, 00:21:33.114 "transport_ack_timeout": 0, 00:21:33.114 "ctrlr_loss_timeout_sec": 0, 
00:21:33.114 "reconnect_delay_sec": 0, 00:21:33.114 "fast_io_fail_timeout_sec": 0, 00:21:33.114 "disable_auto_failback": false, 00:21:33.114 "generate_uuids": false, 00:21:33.114 "transport_tos": 0, 00:21:33.114 "nvme_error_stat": false, 00:21:33.114 "rdma_srq_size": 0, 00:21:33.114 "io_path_stat": false, 00:21:33.114 "allow_accel_sequence": false, 00:21:33.114 "rdma_max_cq_size": 0, 00:21:33.114 "rdma_cm_event_timeout_ms": 0, 00:21:33.114 "dhchap_digests": [ 00:21:33.114 "sha256", 00:21:33.114 "sha384", 00:21:33.114 "sha512" 00:21:33.114 ], 00:21:33.114 "dhchap_dhgroups": [ 00:21:33.114 "null", 00:21:33.114 "ffdhe2048", 00:21:33.114 "ffdhe3072", 00:21:33.114 "ffdhe4096", 00:21:33.114 "ffdhe6144", 00:21:33.114 "ffdhe8192" 00:21:33.114 ] 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "bdev_nvme_set_hotplug", 00:21:33.114 "params": { 00:21:33.114 "period_us": 100000, 00:21:33.114 "enable": false 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "bdev_malloc_create", 00:21:33.114 "params": { 00:21:33.114 "name": "malloc0", 00:21:33.114 "num_blocks": 8192, 00:21:33.114 "block_size": 4096, 00:21:33.114 "physical_block_size": 4096, 00:21:33.114 "uuid": "661e48e1-6fae-4e91-9719-da4a9f297c84", 00:21:33.114 "optimal_io_boundary": 0, 00:21:33.114 "md_size": 0, 00:21:33.114 "dif_type": 0, 00:21:33.114 "dif_is_head_of_md": false, 00:21:33.114 "dif_pi_format": 0 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "bdev_wait_for_examine" 00:21:33.114 } 00:21:33.114 ] 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "subsystem": "nbd", 00:21:33.114 "config": [] 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "subsystem": "scheduler", 00:21:33.114 "config": [ 00:21:33.114 { 00:21:33.114 "method": "framework_set_scheduler", 00:21:33.114 "params": { 00:21:33.114 "name": "static" 00:21:33.114 } 00:21:33.114 } 00:21:33.114 ] 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "subsystem": "nvmf", 00:21:33.114 "config": [ 00:21:33.114 { 
00:21:33.114 "method": "nvmf_set_config", 00:21:33.114 "params": { 00:21:33.114 "discovery_filter": "match_any", 00:21:33.114 "admin_cmd_passthru": { 00:21:33.114 "identify_ctrlr": false 00:21:33.114 }, 00:21:33.114 "dhchap_digests": [ 00:21:33.114 "sha256", 00:21:33.114 "sha384", 00:21:33.114 "sha512" 00:21:33.114 ], 00:21:33.114 "dhchap_dhgroups": [ 00:21:33.114 "null", 00:21:33.114 "ffdhe2048", 00:21:33.114 "ffdhe3072", 00:21:33.114 "ffdhe4096", 00:21:33.114 "ffdhe6144", 00:21:33.114 "ffdhe8192" 00:21:33.114 ] 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "nvmf_set_max_subsystems", 00:21:33.114 "params": { 00:21:33.114 "max_subsystems": 1024 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "nvmf_set_crdt", 00:21:33.114 "params": { 00:21:33.114 "crdt1": 0, 00:21:33.114 "crdt2": 0, 00:21:33.114 "crdt3": 0 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "nvmf_create_transport", 00:21:33.114 "params": { 00:21:33.114 "trtype": "TCP", 00:21:33.114 "max_queue_depth": 128, 00:21:33.114 "max_io_qpairs_per_ctrlr": 127, 00:21:33.114 "in_capsule_data_size": 4096, 00:21:33.114 "max_io_size": 131072, 00:21:33.114 "io_unit_size": 131072, 00:21:33.114 "max_aq_depth": 128, 00:21:33.114 "num_shared_buffers": 511, 00:21:33.114 "buf_cache_size": 4294967295, 00:21:33.114 "dif_insert_or_strip": false, 00:21:33.114 "zcopy": false, 00:21:33.114 "c2h_success": false, 00:21:33.114 "sock_priority": 0, 00:21:33.114 "abort_timeout_sec": 1, 00:21:33.114 "ack_timeout": 0, 00:21:33.114 "data_wr_pool_size": 0 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "nvmf_create_subsystem", 00:21:33.114 "params": { 00:21:33.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.114 "allow_any_host": false, 00:21:33.114 "serial_number": "00000000000000000000", 00:21:33.114 "model_number": "SPDK bdev Controller", 00:21:33.114 "max_namespaces": 32, 00:21:33.114 "min_cntlid": 1, 00:21:33.114 "max_cntlid": 65519, 00:21:33.114 
"ana_reporting": false 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "nvmf_subsystem_add_host", 00:21:33.114 "params": { 00:21:33.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.114 "host": "nqn.2016-06.io.spdk:host1", 00:21:33.114 "psk": "key0" 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "nvmf_subsystem_add_ns", 00:21:33.114 "params": { 00:21:33.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.114 "namespace": { 00:21:33.114 "nsid": 1, 00:21:33.114 "bdev_name": "malloc0", 00:21:33.114 "nguid": "661E48E16FAE4E919719DA4A9F297C84", 00:21:33.114 "uuid": "661e48e1-6fae-4e91-9719-da4a9f297c84", 00:21:33.114 "no_auto_visible": false 00:21:33.114 } 00:21:33.114 } 00:21:33.114 }, 00:21:33.114 { 00:21:33.114 "method": "nvmf_subsystem_add_listener", 00:21:33.114 "params": { 00:21:33.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.114 "listen_address": { 00:21:33.114 "trtype": "TCP", 00:21:33.114 "adrfam": "IPv4", 00:21:33.114 "traddr": "10.0.0.2", 00:21:33.114 "trsvcid": "4420" 00:21:33.114 }, 00:21:33.114 "secure_channel": false, 00:21:33.114 "sock_impl": "ssl" 00:21:33.114 } 00:21:33.114 } 00:21:33.114 ] 00:21:33.114 } 00:21:33.114 ] 00:21:33.114 }' 00:21:33.114 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:33.375 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:33.375 "subsystems": [ 00:21:33.375 { 00:21:33.375 "subsystem": "keyring", 00:21:33.375 "config": [ 00:21:33.375 { 00:21:33.375 "method": "keyring_file_add_key", 00:21:33.375 "params": { 00:21:33.375 "name": "key0", 00:21:33.375 "path": "/tmp/tmp.Z0E6smaUsB" 00:21:33.375 } 00:21:33.375 } 00:21:33.375 ] 00:21:33.375 }, 00:21:33.375 { 00:21:33.375 "subsystem": "iobuf", 00:21:33.375 "config": [ 00:21:33.375 { 00:21:33.375 "method": "iobuf_set_options", 00:21:33.375 "params": { 00:21:33.375 
"small_pool_count": 8192, 00:21:33.375 "large_pool_count": 1024, 00:21:33.375 "small_bufsize": 8192, 00:21:33.375 "large_bufsize": 135168, 00:21:33.375 "enable_numa": false 00:21:33.375 } 00:21:33.375 } 00:21:33.375 ] 00:21:33.375 }, 00:21:33.375 { 00:21:33.375 "subsystem": "sock", 00:21:33.375 "config": [ 00:21:33.375 { 00:21:33.375 "method": "sock_set_default_impl", 00:21:33.375 "params": { 00:21:33.375 "impl_name": "posix" 00:21:33.375 } 00:21:33.375 }, 00:21:33.375 { 00:21:33.375 "method": "sock_impl_set_options", 00:21:33.375 "params": { 00:21:33.375 "impl_name": "ssl", 00:21:33.375 "recv_buf_size": 4096, 00:21:33.375 "send_buf_size": 4096, 00:21:33.375 "enable_recv_pipe": true, 00:21:33.375 "enable_quickack": false, 00:21:33.375 "enable_placement_id": 0, 00:21:33.375 "enable_zerocopy_send_server": true, 00:21:33.375 "enable_zerocopy_send_client": false, 00:21:33.375 "zerocopy_threshold": 0, 00:21:33.375 "tls_version": 0, 00:21:33.375 "enable_ktls": false 00:21:33.375 } 00:21:33.375 }, 00:21:33.375 { 00:21:33.375 "method": "sock_impl_set_options", 00:21:33.375 "params": { 00:21:33.375 "impl_name": "posix", 00:21:33.375 "recv_buf_size": 2097152, 00:21:33.375 "send_buf_size": 2097152, 00:21:33.375 "enable_recv_pipe": true, 00:21:33.375 "enable_quickack": false, 00:21:33.375 "enable_placement_id": 0, 00:21:33.375 "enable_zerocopy_send_server": true, 00:21:33.375 "enable_zerocopy_send_client": false, 00:21:33.375 "zerocopy_threshold": 0, 00:21:33.375 "tls_version": 0, 00:21:33.375 "enable_ktls": false 00:21:33.375 } 00:21:33.375 } 00:21:33.375 ] 00:21:33.375 }, 00:21:33.375 { 00:21:33.375 "subsystem": "vmd", 00:21:33.375 "config": [] 00:21:33.375 }, 00:21:33.375 { 00:21:33.376 "subsystem": "accel", 00:21:33.376 "config": [ 00:21:33.376 { 00:21:33.376 "method": "accel_set_options", 00:21:33.376 "params": { 00:21:33.376 "small_cache_size": 128, 00:21:33.376 "large_cache_size": 16, 00:21:33.376 "task_count": 2048, 00:21:33.376 "sequence_count": 2048, 00:21:33.376 
"buf_count": 2048 00:21:33.376 } 00:21:33.376 } 00:21:33.376 ] 00:21:33.376 }, 00:21:33.376 { 00:21:33.376 "subsystem": "bdev", 00:21:33.376 "config": [ 00:21:33.376 { 00:21:33.376 "method": "bdev_set_options", 00:21:33.376 "params": { 00:21:33.376 "bdev_io_pool_size": 65535, 00:21:33.376 "bdev_io_cache_size": 256, 00:21:33.376 "bdev_auto_examine": true, 00:21:33.376 "iobuf_small_cache_size": 128, 00:21:33.376 "iobuf_large_cache_size": 16 00:21:33.376 } 00:21:33.376 }, 00:21:33.376 { 00:21:33.376 "method": "bdev_raid_set_options", 00:21:33.376 "params": { 00:21:33.376 "process_window_size_kb": 1024, 00:21:33.376 "process_max_bandwidth_mb_sec": 0 00:21:33.376 } 00:21:33.376 }, 00:21:33.376 { 00:21:33.376 "method": "bdev_iscsi_set_options", 00:21:33.376 "params": { 00:21:33.376 "timeout_sec": 30 00:21:33.376 } 00:21:33.376 }, 00:21:33.376 { 00:21:33.376 "method": "bdev_nvme_set_options", 00:21:33.376 "params": { 00:21:33.376 "action_on_timeout": "none", 00:21:33.376 "timeout_us": 0, 00:21:33.376 "timeout_admin_us": 0, 00:21:33.376 "keep_alive_timeout_ms": 10000, 00:21:33.376 "arbitration_burst": 0, 00:21:33.376 "low_priority_weight": 0, 00:21:33.376 "medium_priority_weight": 0, 00:21:33.376 "high_priority_weight": 0, 00:21:33.376 "nvme_adminq_poll_period_us": 10000, 00:21:33.376 "nvme_ioq_poll_period_us": 0, 00:21:33.376 "io_queue_requests": 512, 00:21:33.376 "delay_cmd_submit": true, 00:21:33.376 "transport_retry_count": 4, 00:21:33.376 "bdev_retry_count": 3, 00:21:33.376 "transport_ack_timeout": 0, 00:21:33.376 "ctrlr_loss_timeout_sec": 0, 00:21:33.376 "reconnect_delay_sec": 0, 00:21:33.376 "fast_io_fail_timeout_sec": 0, 00:21:33.376 "disable_auto_failback": false, 00:21:33.376 "generate_uuids": false, 00:21:33.376 "transport_tos": 0, 00:21:33.376 "nvme_error_stat": false, 00:21:33.376 "rdma_srq_size": 0, 00:21:33.376 "io_path_stat": false, 00:21:33.376 "allow_accel_sequence": false, 00:21:33.376 "rdma_max_cq_size": 0, 00:21:33.376 "rdma_cm_event_timeout_ms": 0, 
00:21:33.376 "dhchap_digests": [ 00:21:33.376 "sha256", 00:21:33.376 "sha384", 00:21:33.376 "sha512" 00:21:33.376 ], 00:21:33.376 "dhchap_dhgroups": [ 00:21:33.376 "null", 00:21:33.376 "ffdhe2048", 00:21:33.376 "ffdhe3072", 00:21:33.376 "ffdhe4096", 00:21:33.376 "ffdhe6144", 00:21:33.376 "ffdhe8192" 00:21:33.376 ] 00:21:33.376 } 00:21:33.376 }, 00:21:33.376 { 00:21:33.376 "method": "bdev_nvme_attach_controller", 00:21:33.376 "params": { 00:21:33.376 "name": "nvme0", 00:21:33.376 "trtype": "TCP", 00:21:33.376 "adrfam": "IPv4", 00:21:33.376 "traddr": "10.0.0.2", 00:21:33.376 "trsvcid": "4420", 00:21:33.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.376 "prchk_reftag": false, 00:21:33.376 "prchk_guard": false, 00:21:33.376 "ctrlr_loss_timeout_sec": 0, 00:21:33.376 "reconnect_delay_sec": 0, 00:21:33.376 "fast_io_fail_timeout_sec": 0, 00:21:33.376 "psk": "key0", 00:21:33.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:33.376 "hdgst": false, 00:21:33.376 "ddgst": false, 00:21:33.376 "multipath": "multipath" 00:21:33.376 } 00:21:33.376 }, 00:21:33.376 { 00:21:33.376 "method": "bdev_nvme_set_hotplug", 00:21:33.376 "params": { 00:21:33.376 "period_us": 100000, 00:21:33.376 "enable": false 00:21:33.376 } 00:21:33.376 }, 00:21:33.376 { 00:21:33.376 "method": "bdev_enable_histogram", 00:21:33.376 "params": { 00:21:33.376 "name": "nvme0n1", 00:21:33.376 "enable": true 00:21:33.376 } 00:21:33.376 }, 00:21:33.376 { 00:21:33.376 "method": "bdev_wait_for_examine" 00:21:33.376 } 00:21:33.376 ] 00:21:33.376 }, 00:21:33.376 { 00:21:33.376 "subsystem": "nbd", 00:21:33.376 "config": [] 00:21:33.376 } 00:21:33.376 ] 00:21:33.376 }' 00:21:33.376 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 390536 00:21:33.376 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 390536 ']' 00:21:33.376 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 390536 00:21:33.376 20:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:33.376 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.376 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390536 00:21:33.376 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.376 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.376 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390536' 00:21:33.376 killing process with pid 390536 00:21:33.376 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 390536 00:21:33.376 Received shutdown signal, test time was about 1.000000 seconds 00:21:33.376 00:21:33.376 Latency(us) 00:21:33.376 [2024-12-05T19:41:26.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.376 [2024-12-05T19:41:26.817Z] =================================================================================================================== 00:21:33.376 [2024-12-05T19:41:26.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.376 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 390536 00:21:33.637 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 390276 00:21:33.637 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 390276 ']' 00:21:33.637 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 390276 00:21:33.637 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:33.637 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.637 20:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390276 00:21:33.637 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:33.637 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:33.637 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390276' 00:21:33.637 killing process with pid 390276 00:21:33.637 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 390276 00:21:33.637 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 390276 00:21:33.897 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:33.897 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:33.897 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.897 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:33.897 "subsystems": [ 00:21:33.897 { 00:21:33.897 "subsystem": "keyring", 00:21:33.897 "config": [ 00:21:33.897 { 00:21:33.897 "method": "keyring_file_add_key", 00:21:33.897 "params": { 00:21:33.897 "name": "key0", 00:21:33.897 "path": "/tmp/tmp.Z0E6smaUsB" 00:21:33.897 } 00:21:33.897 } 00:21:33.897 ] 00:21:33.897 }, 00:21:33.898 { 00:21:33.898 "subsystem": "iobuf", 00:21:33.898 "config": [ 00:21:33.898 { 00:21:33.898 "method": "iobuf_set_options", 00:21:33.898 "params": { 00:21:33.898 "small_pool_count": 8192, 00:21:33.898 "large_pool_count": 1024, 00:21:33.898 "small_bufsize": 8192, 00:21:33.898 "large_bufsize": 135168, 00:21:33.898 "enable_numa": false 00:21:33.898 } 00:21:33.898 } 00:21:33.898 ] 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "subsystem": "sock", 00:21:33.898 "config": [ 00:21:33.898 { 
00:21:33.898 "method": "sock_set_default_impl", 00:21:33.898 "params": { 00:21:33.898 "impl_name": "posix" 00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": "sock_impl_set_options", 00:21:33.898 "params": { 00:21:33.898 "impl_name": "ssl", 00:21:33.898 "recv_buf_size": 4096, 00:21:33.898 "send_buf_size": 4096, 00:21:33.898 "enable_recv_pipe": true, 00:21:33.898 "enable_quickack": false, 00:21:33.898 "enable_placement_id": 0, 00:21:33.898 "enable_zerocopy_send_server": true, 00:21:33.898 "enable_zerocopy_send_client": false, 00:21:33.898 "zerocopy_threshold": 0, 00:21:33.898 "tls_version": 0, 00:21:33.898 "enable_ktls": false 00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": "sock_impl_set_options", 00:21:33.898 "params": { 00:21:33.898 "impl_name": "posix", 00:21:33.898 "recv_buf_size": 2097152, 00:21:33.898 "send_buf_size": 2097152, 00:21:33.898 "enable_recv_pipe": true, 00:21:33.898 "enable_quickack": false, 00:21:33.898 "enable_placement_id": 0, 00:21:33.898 "enable_zerocopy_send_server": true, 00:21:33.898 "enable_zerocopy_send_client": false, 00:21:33.898 "zerocopy_threshold": 0, 00:21:33.898 "tls_version": 0, 00:21:33.898 "enable_ktls": false 00:21:33.898 } 00:21:33.898 } 00:21:33.898 ] 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "subsystem": "vmd", 00:21:33.898 "config": [] 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "subsystem": "accel", 00:21:33.898 "config": [ 00:21:33.898 { 00:21:33.898 "method": "accel_set_options", 00:21:33.898 "params": { 00:21:33.898 "small_cache_size": 128, 00:21:33.898 "large_cache_size": 16, 00:21:33.898 "task_count": 2048, 00:21:33.898 "sequence_count": 2048, 00:21:33.898 "buf_count": 2048 00:21:33.898 } 00:21:33.898 } 00:21:33.898 ] 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "subsystem": "bdev", 00:21:33.898 "config": [ 00:21:33.898 { 00:21:33.898 "method": "bdev_set_options", 00:21:33.898 "params": { 00:21:33.898 "bdev_io_pool_size": 65535, 00:21:33.898 "bdev_io_cache_size": 256, 
00:21:33.898 "bdev_auto_examine": true, 00:21:33.898 "iobuf_small_cache_size": 128, 00:21:33.898 "iobuf_large_cache_size": 16 00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": "bdev_raid_set_options", 00:21:33.898 "params": { 00:21:33.898 "process_window_size_kb": 1024, 00:21:33.898 "process_max_bandwidth_mb_sec": 0 00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": "bdev_iscsi_set_options", 00:21:33.898 "params": { 00:21:33.898 "timeout_sec": 30 00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": "bdev_nvme_set_options", 00:21:33.898 "params": { 00:21:33.898 "action_on_timeout": "none", 00:21:33.898 "timeout_us": 0, 00:21:33.898 "timeout_admin_us": 0, 00:21:33.898 "keep_alive_timeout_ms": 10000, 00:21:33.898 "arbitration_burst": 0, 00:21:33.898 "low_priority_weight": 0, 00:21:33.898 "medium_priority_weight": 0, 00:21:33.898 "high_priority_weight": 0, 00:21:33.898 "nvme_adminq_poll_period_us": 10000, 00:21:33.898 "nvme_ioq_poll_period_us": 0, 00:21:33.898 "io_queue_requests": 0, 00:21:33.898 "delay_cmd_submit": true, 00:21:33.898 "transport_retry_count": 4, 00:21:33.898 "bdev_retry_count": 3, 00:21:33.898 "transport_ack_timeout": 0, 00:21:33.898 "ctrlr_loss_timeout_sec": 0, 00:21:33.898 "reconnect_delay_sec": 0, 00:21:33.898 "fast_io_fail_timeout_sec": 0, 00:21:33.898 "disable_auto_failback": false, 00:21:33.898 "generate_uuids": false, 00:21:33.898 "transport_tos": 0, 00:21:33.898 "nvme_error_stat": false, 00:21:33.898 "rdma_srq_size": 0, 00:21:33.898 "io_path_stat": false, 00:21:33.898 "allow_accel_sequence": false, 00:21:33.898 "rdma_max_cq_size": 0, 00:21:33.898 "rdma_cm_event_timeout_ms": 0, 00:21:33.898 "dhchap_digests": [ 00:21:33.898 "sha256", 00:21:33.898 "sha384", 00:21:33.898 "sha512" 00:21:33.898 ], 00:21:33.898 "dhchap_dhgroups": [ 00:21:33.898 "null", 00:21:33.898 "ffdhe2048", 00:21:33.898 "ffdhe3072", 00:21:33.898 "ffdhe4096", 00:21:33.898 "ffdhe6144", 00:21:33.898 "ffdhe8192" 00:21:33.898 ] 
00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": "bdev_nvme_set_hotplug", 00:21:33.898 "params": { 00:21:33.898 "period_us": 100000, 00:21:33.898 "enable": false 00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": "bdev_malloc_create", 00:21:33.898 "params": { 00:21:33.898 "name": "malloc0", 00:21:33.898 "num_blocks": 8192, 00:21:33.898 "block_size": 4096, 00:21:33.898 "physical_block_size": 4096, 00:21:33.898 "uuid": "661e48e1-6fae-4e91-9719-da4a9f297c84", 00:21:33.898 "optimal_io_boundary": 0, 00:21:33.898 "md_size": 0, 00:21:33.898 "dif_type": 0, 00:21:33.898 "dif_is_head_of_md": false, 00:21:33.898 "dif_pi_format": 0 00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": "bdev_wait_for_examine" 00:21:33.898 } 00:21:33.898 ] 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "subsystem": "nbd", 00:21:33.898 "config": [] 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "subsystem": "scheduler", 00:21:33.898 "config": [ 00:21:33.898 { 00:21:33.898 "method": "framework_set_scheduler", 00:21:33.898 "params": { 00:21:33.898 "name": "static" 00:21:33.898 } 00:21:33.898 } 00:21:33.898 ] 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "subsystem": "nvmf", 00:21:33.898 "config": [ 00:21:33.898 { 00:21:33.898 "method": "nvmf_set_config", 00:21:33.898 "params": { 00:21:33.898 "discovery_filter": "match_any", 00:21:33.898 "admin_cmd_passthru": { 00:21:33.898 "identify_ctrlr": false 00:21:33.898 }, 00:21:33.898 "dhchap_digests": [ 00:21:33.898 "sha256", 00:21:33.898 "sha384", 00:21:33.898 "sha512" 00:21:33.898 ], 00:21:33.898 "dhchap_dhgroups": [ 00:21:33.898 "null", 00:21:33.898 "ffdhe2048", 00:21:33.898 "ffdhe3072", 00:21:33.898 "ffdhe4096", 00:21:33.898 "ffdhe6144", 00:21:33.898 "ffdhe8192" 00:21:33.898 ] 00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": "nvmf_set_max_subsystems", 00:21:33.898 "params": { 00:21:33.898 "max_subsystems": 1024 00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": 
"nvmf_set_crdt", 00:21:33.898 "params": { 00:21:33.898 "crdt1": 0, 00:21:33.898 "crdt2": 0, 00:21:33.898 "crdt3": 0 00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": "nvmf_create_transport", 00:21:33.898 "params": { 00:21:33.898 "trtype": "TCP", 00:21:33.898 "max_queue_depth": 128, 00:21:33.898 "max_io_qpairs_per_ctrlr": 127, 00:21:33.898 "in_capsule_data_size": 4096, 00:21:33.898 "max_io_size": 131072, 00:21:33.898 "io_unit_size": 131072, 00:21:33.898 "max_aq_depth": 128, 00:21:33.898 "num_shared_buffers": 511, 00:21:33.898 "buf_cache_size": 4294967295, 00:21:33.898 "dif_insert_or_strip": false, 00:21:33.898 "zcopy": false, 00:21:33.898 "c2h_success": false, 00:21:33.898 "sock_priority": 0, 00:21:33.898 "abort_timeout_sec": 1, 00:21:33.898 "ack_timeout": 0, 00:21:33.898 "data_wr_pool_size": 0 00:21:33.898 } 00:21:33.898 }, 00:21:33.898 { 00:21:33.898 "method": "nvmf_create_subsystem", 00:21:33.898 "params": { 00:21:33.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.898 "allow_any_host": false, 00:21:33.898 "serial_number": "00000000000000000000", 00:21:33.898 "model_number": "SPDK bdev Controller", 00:21:33.898 "max_namespaces": 32, 00:21:33.898 "min_cntlid": 1, 00:21:33.898 "max_cntlid": 65519, 00:21:33.899 "ana_reporting": false 00:21:33.899 } 00:21:33.899 }, 00:21:33.899 { 00:21:33.899 "method": "nvmf_subsystem_add_host", 00:21:33.899 "params": { 00:21:33.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.899 "host": "nqn.2016-06.io.spdk:host1", 00:21:33.899 "psk": "key0" 00:21:33.899 } 00:21:33.899 }, 00:21:33.899 { 00:21:33.899 "method": "nvmf_subsystem_add_ns", 00:21:33.899 "params": { 00:21:33.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.899 "namespace": { 00:21:33.899 "nsid": 1, 00:21:33.899 "bdev_name": "malloc0", 00:21:33.899 "nguid": "661E48E16FAE4E919719DA4A9F297C84", 00:21:33.899 "uuid": "661e48e1-6fae-4e91-9719-da4a9f297c84", 00:21:33.899 "no_auto_visible": false 00:21:33.899 } 00:21:33.899 } 00:21:33.899 }, 00:21:33.899 { 
00:21:33.899 "method": "nvmf_subsystem_add_listener", 00:21:33.899 "params": { 00:21:33.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.899 "listen_address": { 00:21:33.899 "trtype": "TCP", 00:21:33.899 "adrfam": "IPv4", 00:21:33.899 "traddr": "10.0.0.2", 00:21:33.899 "trsvcid": "4420" 00:21:33.899 }, 00:21:33.899 "secure_channel": false, 00:21:33.899 "sock_impl": "ssl" 00:21:33.899 } 00:21:33.899 } 00:21:33.899 ] 00:21:33.899 } 00:21:33.899 ] 00:21:33.899 }' 00:21:33.899 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.899 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=390851 00:21:33.899 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:33.899 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 390851 00:21:33.899 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 390851 ']' 00:21:33.899 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.899 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.899 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.899 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.899 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.899 [2024-12-05 20:41:27.207334] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:21:33.899 [2024-12-05 20:41:27.207379] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.899 [2024-12-05 20:41:27.283672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.899 [2024-12-05 20:41:27.321346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.899 [2024-12-05 20:41:27.321385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.899 [2024-12-05 20:41:27.321391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.899 [2024-12-05 20:41:27.321397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.899 [2024-12-05 20:41:27.321401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:33.899 [2024-12-05 20:41:27.321951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.159 [2024-12-05 20:41:27.534328] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.159 [2024-12-05 20:41:27.566366] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:34.159 [2024-12-05 20:41:27.566572] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=391118 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 391118 /var/tmp/bdevperf.sock 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 391118 ']' 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.730 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:34.730 "subsystems": [ 00:21:34.730 { 00:21:34.730 "subsystem": "keyring", 00:21:34.730 "config": [ 00:21:34.730 { 00:21:34.730 "method": "keyring_file_add_key", 00:21:34.730 "params": { 00:21:34.730 "name": "key0", 00:21:34.730 "path": "/tmp/tmp.Z0E6smaUsB" 00:21:34.730 } 00:21:34.730 } 00:21:34.730 ] 00:21:34.730 }, 00:21:34.730 { 00:21:34.730 "subsystem": "iobuf", 00:21:34.730 "config": [ 00:21:34.730 { 00:21:34.730 "method": "iobuf_set_options", 00:21:34.730 "params": { 00:21:34.730 "small_pool_count": 8192, 00:21:34.730 "large_pool_count": 1024, 00:21:34.730 "small_bufsize": 8192, 00:21:34.730 "large_bufsize": 135168, 00:21:34.730 "enable_numa": false 00:21:34.730 } 00:21:34.730 } 00:21:34.730 ] 00:21:34.730 }, 00:21:34.730 { 00:21:34.730 "subsystem": "sock", 00:21:34.730 "config": [ 00:21:34.730 { 00:21:34.730 "method": "sock_set_default_impl", 00:21:34.730 "params": { 00:21:34.730 "impl_name": "posix" 00:21:34.730 } 00:21:34.730 }, 00:21:34.730 { 00:21:34.730 "method": "sock_impl_set_options", 00:21:34.730 "params": { 00:21:34.730 "impl_name": "ssl", 00:21:34.730 "recv_buf_size": 4096, 00:21:34.730 "send_buf_size": 4096, 00:21:34.730 "enable_recv_pipe": true, 00:21:34.730 "enable_quickack": false, 00:21:34.730 "enable_placement_id": 0, 00:21:34.730 "enable_zerocopy_send_server": true, 00:21:34.730 "enable_zerocopy_send_client": false, 00:21:34.730 "zerocopy_threshold": 0, 00:21:34.730 "tls_version": 0, 00:21:34.730 "enable_ktls": false 00:21:34.730 } 00:21:34.730 }, 00:21:34.730 { 00:21:34.730 "method": "sock_impl_set_options", 00:21:34.730 "params": { 
00:21:34.730 "impl_name": "posix", 00:21:34.730 "recv_buf_size": 2097152, 00:21:34.730 "send_buf_size": 2097152, 00:21:34.730 "enable_recv_pipe": true, 00:21:34.730 "enable_quickack": false, 00:21:34.730 "enable_placement_id": 0, 00:21:34.730 "enable_zerocopy_send_server": true, 00:21:34.730 "enable_zerocopy_send_client": false, 00:21:34.730 "zerocopy_threshold": 0, 00:21:34.730 "tls_version": 0, 00:21:34.730 "enable_ktls": false 00:21:34.730 } 00:21:34.730 } 00:21:34.730 ] 00:21:34.730 }, 00:21:34.730 { 00:21:34.730 "subsystem": "vmd", 00:21:34.730 "config": [] 00:21:34.730 }, 00:21:34.730 { 00:21:34.730 "subsystem": "accel", 00:21:34.730 "config": [ 00:21:34.730 { 00:21:34.730 "method": "accel_set_options", 00:21:34.730 "params": { 00:21:34.730 "small_cache_size": 128, 00:21:34.730 "large_cache_size": 16, 00:21:34.730 "task_count": 2048, 00:21:34.730 "sequence_count": 2048, 00:21:34.730 "buf_count": 2048 00:21:34.730 } 00:21:34.730 } 00:21:34.730 ] 00:21:34.730 }, 00:21:34.730 { 00:21:34.730 "subsystem": "bdev", 00:21:34.730 "config": [ 00:21:34.730 { 00:21:34.730 "method": "bdev_set_options", 00:21:34.730 "params": { 00:21:34.730 "bdev_io_pool_size": 65535, 00:21:34.730 "bdev_io_cache_size": 256, 00:21:34.730 "bdev_auto_examine": true, 00:21:34.730 "iobuf_small_cache_size": 128, 00:21:34.730 "iobuf_large_cache_size": 16 00:21:34.730 } 00:21:34.730 }, 00:21:34.730 { 00:21:34.730 "method": "bdev_raid_set_options", 00:21:34.730 "params": { 00:21:34.730 "process_window_size_kb": 1024, 00:21:34.730 "process_max_bandwidth_mb_sec": 0 00:21:34.730 } 00:21:34.730 }, 00:21:34.730 { 00:21:34.730 "method": "bdev_iscsi_set_options", 00:21:34.730 "params": { 00:21:34.730 "timeout_sec": 30 00:21:34.730 } 00:21:34.730 }, 00:21:34.730 { 00:21:34.730 "method": "bdev_nvme_set_options", 00:21:34.730 "params": { 00:21:34.730 "action_on_timeout": "none", 00:21:34.730 "timeout_us": 0, 00:21:34.730 "timeout_admin_us": 0, 00:21:34.730 "keep_alive_timeout_ms": 10000, 00:21:34.730 
"arbitration_burst": 0, 00:21:34.731 "low_priority_weight": 0, 00:21:34.731 "medium_priority_weight": 0, 00:21:34.731 "high_priority_weight": 0, 00:21:34.731 "nvme_adminq_poll_period_us": 10000, 00:21:34.731 "nvme_ioq_poll_period_us": 0, 00:21:34.731 "io_queue_requests": 512, 00:21:34.731 "delay_cmd_submit": true, 00:21:34.731 "transport_retry_count": 4, 00:21:34.731 "bdev_retry_count": 3, 00:21:34.731 "transport_ack_timeout": 0, 00:21:34.731 "ctrlr_loss_timeout_sec": 0, 00:21:34.731 "reconnect_delay_sec": 0, 00:21:34.731 "fast_io_fail_timeout_sec": 0, 00:21:34.731 "disable_auto_failback": false, 00:21:34.731 "generate_uuids": false, 00:21:34.731 "transport_tos": 0, 00:21:34.731 "nvme_error_stat": false, 00:21:34.731 "rdma_srq_size": 0, 00:21:34.731 "io_path_stat": false, 00:21:34.731 "allow_accel_sequence": false, 00:21:34.731 "rdma_max_cq_size": 0, 00:21:34.731 "rdma_cm_event_timeout_ms": 0, 00:21:34.731 "dhchap_digests": [ 00:21:34.731 "sha256", 00:21:34.731 "sha384", 00:21:34.731 "sha512" 00:21:34.731 ], 00:21:34.731 "dhchap_dhgroups": [ 00:21:34.731 "null", 00:21:34.731 "ffdhe2048", 00:21:34.731 "ffdhe3072", 00:21:34.731 "ffdhe4096", 00:21:34.731 "ffdhe6144", 00:21:34.731 "ffdhe8192" 00:21:34.731 ] 00:21:34.731 } 00:21:34.731 }, 00:21:34.731 { 00:21:34.731 "method": "bdev_nvme_attach_controller", 00:21:34.731 "params": { 00:21:34.731 "name": "nvme0", 00:21:34.731 "trtype": "TCP", 00:21:34.731 "adrfam": "IPv4", 00:21:34.731 "traddr": "10.0.0.2", 00:21:34.731 "trsvcid": "4420", 00:21:34.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.731 "prchk_reftag": false, 00:21:34.731 "prchk_guard": false, 00:21:34.731 "ctrlr_loss_timeout_sec": 0, 00:21:34.731 "reconnect_delay_sec": 0, 00:21:34.731 "fast_io_fail_timeout_sec": 0, 00:21:34.731 "psk": "key0", 00:21:34.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.731 "hdgst": false, 00:21:34.731 "ddgst": false, 00:21:34.731 "multipath": "multipath" 00:21:34.731 } 00:21:34.731 }, 00:21:34.731 { 00:21:34.731 
"method": "bdev_nvme_set_hotplug", 00:21:34.731 "params": { 00:21:34.731 "period_us": 100000, 00:21:34.731 "enable": false 00:21:34.731 } 00:21:34.731 }, 00:21:34.731 { 00:21:34.731 "method": "bdev_enable_histogram", 00:21:34.731 "params": { 00:21:34.731 "name": "nvme0n1", 00:21:34.731 "enable": true 00:21:34.731 } 00:21:34.731 }, 00:21:34.731 { 00:21:34.731 "method": "bdev_wait_for_examine" 00:21:34.731 } 00:21:34.731 ] 00:21:34.731 }, 00:21:34.731 { 00:21:34.731 "subsystem": "nbd", 00:21:34.731 "config": [] 00:21:34.731 } 00:21:34.731 ] 00:21:34.731 }' 00:21:34.731 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.731 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.731 [2024-12-05 20:41:28.097938] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:34.731 [2024-12-05 20:41:28.097983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391118 ] 00:21:34.731 [2024-12-05 20:41:28.167968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.991 [2024-12-05 20:41:28.205697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.991 [2024-12-05 20:41:28.357219] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.560 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.560 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:35.560 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:35.560 20:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:35.818 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.818 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:35.818 Running I/O for 1 seconds... 00:21:37.015 5354.00 IOPS, 20.91 MiB/s 00:21:37.015 Latency(us) 00:21:37.015 [2024-12-05T19:41:30.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.015 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:37.015 Verification LBA range: start 0x0 length 0x2000 00:21:37.015 nvme0n1 : 1.01 5416.66 21.16 0.00 0.00 23482.58 5183.30 29550.78 00:21:37.015 [2024-12-05T19:41:30.456Z] =================================================================================================================== 00:21:37.015 [2024-12-05T19:41:30.456Z] Total : 5416.66 21.16 0.00 0.00 23482.58 5183.30 29550.78 00:21:37.015 { 00:21:37.015 "results": [ 00:21:37.015 { 00:21:37.015 "job": "nvme0n1", 00:21:37.015 "core_mask": "0x2", 00:21:37.015 "workload": "verify", 00:21:37.015 "status": "finished", 00:21:37.015 "verify_range": { 00:21:37.015 "start": 0, 00:21:37.015 "length": 8192 00:21:37.015 }, 00:21:37.015 "queue_depth": 128, 00:21:37.015 "io_size": 4096, 00:21:37.015 "runtime": 1.012063, 00:21:37.015 "iops": 5416.65884436048, 00:21:37.015 "mibps": 21.158823610783124, 00:21:37.015 "io_failed": 0, 00:21:37.015 "io_timeout": 0, 00:21:37.015 "avg_latency_us": 23482.579517760605, 00:21:37.015 "min_latency_us": 5183.301818181818, 00:21:37.015 "max_latency_us": 29550.778181818183 00:21:37.015 } 00:21:37.015 ], 00:21:37.015 "core_count": 1 00:21:37.015 } 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:37.015 20:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:37.015 nvmf_trace.0 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 391118 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 391118 ']' 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 391118 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 391118 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391118' 00:21:37.015 killing process with pid 391118 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 391118 00:21:37.015 Received shutdown signal, test time was about 1.000000 seconds 00:21:37.015 00:21:37.015 Latency(us) 00:21:37.015 [2024-12-05T19:41:30.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.015 [2024-12-05T19:41:30.456Z] =================================================================================================================== 00:21:37.015 [2024-12-05T19:41:30.456Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.015 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 391118 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:37.273 rmmod nvme_tcp 00:21:37.273 rmmod nvme_fabrics 00:21:37.273 rmmod nvme_keyring 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 390851 ']' 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 390851 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 390851 ']' 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 390851 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390851 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390851' 00:21:37.273 killing process with pid 390851 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 390851 00:21:37.273 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 390851 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.592 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.495 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:39.495 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.GRhkxAw9Hl /tmp/tmp.lhIJyUZmFK /tmp/tmp.Z0E6smaUsB 00:21:39.495 00:21:39.495 real 1m21.692s 00:21:39.495 user 2m3.487s 00:21:39.495 sys 0m30.908s 00:21:39.495 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:39.495 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.495 ************************************ 00:21:39.495 END TEST nvmf_tls 00:21:39.495 ************************************ 00:21:39.495 20:41:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:39.495 20:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:39.495 20:41:32 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:39.495 20:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:39.777 ************************************ 00:21:39.777 START TEST nvmf_fips 00:21:39.777 ************************************ 00:21:39.777 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:39.777 * Looking for test storage... 00:21:39.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:39.777 
20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:39.777 20:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.777 --rc genhtml_branch_coverage=1 00:21:39.777 --rc genhtml_function_coverage=1 00:21:39.777 --rc genhtml_legend=1 00:21:39.777 --rc geninfo_all_blocks=1 00:21:39.777 --rc geninfo_unexecuted_blocks=1 00:21:39.777 00:21:39.777 ' 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.777 --rc genhtml_branch_coverage=1 00:21:39.777 --rc genhtml_function_coverage=1 00:21:39.777 --rc genhtml_legend=1 00:21:39.777 --rc geninfo_all_blocks=1 00:21:39.777 --rc geninfo_unexecuted_blocks=1 00:21:39.777 00:21:39.777 ' 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.777 --rc genhtml_branch_coverage=1 00:21:39.777 --rc genhtml_function_coverage=1 00:21:39.777 --rc genhtml_legend=1 00:21:39.777 --rc geninfo_all_blocks=1 00:21:39.777 --rc geninfo_unexecuted_blocks=1 00:21:39.777 00:21:39.777 ' 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.777 --rc genhtml_branch_coverage=1 00:21:39.777 --rc genhtml_function_coverage=1 00:21:39.777 --rc genhtml_legend=1 00:21:39.777 --rc geninfo_all_blocks=1 00:21:39.777 --rc geninfo_unexecuted_blocks=1 00:21:39.777 00:21:39.777 ' 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.777 20:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.777 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.778 20:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:39.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:39.778 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:40.038 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:40.039 Error setting digest 00:21:40.039 402234B32C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:40.039 402234B32C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.039 20:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.039 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:46.617 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:46.617 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:46.617 Found net devices under 0000:af:00.0: cvl_0_0 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:46.617 Found net devices under 0000:af:00.1: cvl_0_1 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:46.617 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.618 20:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:46.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:21:46.618 00:21:46.618 --- 10.0.0.2 ping statistics --- 00:21:46.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.618 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:21:46.618 00:21:46.618 --- 10.0.0.1 ping statistics --- 00:21:46.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.618 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:46.618 20:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=395330 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 395330 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 395330 ']' 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.618 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:46.618 [2024-12-05 20:41:39.455207] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:21:46.618 [2024-12-05 20:41:39.455257] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.618 [2024-12-05 20:41:39.527834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.618 [2024-12-05 20:41:39.565960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.618 [2024-12-05 20:41:39.565993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.618 [2024-12-05 20:41:39.565999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.618 [2024-12-05 20:41:39.566004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.618 [2024-12-05 20:41:39.566009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:46.618 [2024-12-05 20:41:39.566542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.9WD 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.9WD 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.9WD 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.9WD 00:21:46.878 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:47.138 [2024-12-05 20:41:40.450265] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.138 [2024-12-05 20:41:40.466268] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:47.138 [2024-12-05 20:41:40.466435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.138 malloc0 00:21:47.138 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.138 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=395448 00:21:47.138 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.138 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 395448 /var/tmp/bdevperf.sock 00:21:47.138 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 395448 ']' 00:21:47.138 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.138 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.138 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.138 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.138 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:47.398 [2024-12-05 20:41:40.593800] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:21:47.398 [2024-12-05 20:41:40.593854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395448 ] 00:21:47.398 [2024-12-05 20:41:40.665485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.398 [2024-12-05 20:41:40.703599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.968 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.968 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:47.968 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.9WD 00:21:48.227 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:48.487 [2024-12-05 20:41:41.732424] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.487 TLSTESTn1 00:21:48.487 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:48.487 Running I/O for 10 seconds... 
00:21:50.803 4971.00 IOPS, 19.42 MiB/s [2024-12-05T19:41:45.182Z] 5351.00 IOPS, 20.90 MiB/s [2024-12-05T19:41:46.117Z] 5330.33 IOPS, 20.82 MiB/s [2024-12-05T19:41:47.054Z] 5361.25 IOPS, 20.94 MiB/s [2024-12-05T19:41:47.992Z] 5424.20 IOPS, 21.19 MiB/s [2024-12-05T19:41:48.930Z] 5516.33 IOPS, 21.55 MiB/s [2024-12-05T19:41:50.304Z] 5580.71 IOPS, 21.80 MiB/s [2024-12-05T19:41:51.239Z] 5623.50 IOPS, 21.97 MiB/s [2024-12-05T19:41:52.175Z] 5634.11 IOPS, 22.01 MiB/s [2024-12-05T19:41:52.175Z] 5664.20 IOPS, 22.13 MiB/s 00:21:58.734 Latency(us) 00:21:58.734 [2024-12-05T19:41:52.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.734 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:58.734 Verification LBA range: start 0x0 length 0x2000 00:21:58.734 TLSTESTn1 : 10.01 5669.96 22.15 0.00 0.00 22543.82 5034.36 70063.94 00:21:58.734 [2024-12-05T19:41:52.175Z] =================================================================================================================== 00:21:58.734 [2024-12-05T19:41:52.175Z] Total : 5669.96 22.15 0.00 0.00 22543.82 5034.36 70063.94 00:21:58.734 { 00:21:58.734 "results": [ 00:21:58.734 { 00:21:58.734 "job": "TLSTESTn1", 00:21:58.734 "core_mask": "0x4", 00:21:58.734 "workload": "verify", 00:21:58.734 "status": "finished", 00:21:58.734 "verify_range": { 00:21:58.734 "start": 0, 00:21:58.734 "length": 8192 00:21:58.734 }, 00:21:58.734 "queue_depth": 128, 00:21:58.734 "io_size": 4096, 00:21:58.734 "runtime": 10.012232, 00:21:58.734 "iops": 5669.964499424304, 00:21:58.734 "mibps": 22.14829882587619, 00:21:58.734 "io_failed": 0, 00:21:58.734 "io_timeout": 0, 00:21:58.734 "avg_latency_us": 22543.820321910647, 00:21:58.734 "min_latency_us": 5034.356363636363, 00:21:58.734 "max_latency_us": 70063.94181818182 00:21:58.734 } 00:21:58.734 ], 00:21:58.734 "core_count": 1 00:21:58.734 } 00:21:58.734 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:58.734 
20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:58.734 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:58.734 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:58.734 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:58.734 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:58.734 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:58.734 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:58.734 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:58.734 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:58.734 nvmf_trace.0 00:21:58.734 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:58.734 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 395448 00:21:58.734 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 395448 ']' 00:21:58.735 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 395448 00:21:58.735 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:58.735 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.735 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395448 00:21:58.735 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:58.735 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:58.735 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395448' 00:21:58.735 killing process with pid 395448 00:21:58.735 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 395448 00:21:58.735 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.735 00:21:58.735 Latency(us) 00:21:58.735 [2024-12-05T19:41:52.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.735 [2024-12-05T19:41:52.176Z] =================================================================================================================== 00:21:58.735 [2024-12-05T19:41:52.176Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:58.735 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 395448 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.993 rmmod nvme_tcp 00:21:58.993 rmmod nvme_fabrics 00:21:58.993 rmmod nvme_keyring 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.993 20:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 395330 ']' 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 395330 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 395330 ']' 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 395330 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395330 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395330' 00:21:58.993 killing process with pid 395330 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 395330 00:21:58.993 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 395330 00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.253 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.9WD 00:22:01.794 00:22:01.794 real 0m21.658s 00:22:01.794 user 0m23.251s 00:22:01.794 sys 0m9.634s 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:01.794 ************************************ 00:22:01.794 END TEST nvmf_fips 00:22:01.794 ************************************ 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:01.794 ************************************ 00:22:01.794 START TEST nvmf_control_msg_list 00:22:01.794 ************************************ 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:01.794 * Looking for test storage... 00:22:01.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:01.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.794 --rc genhtml_branch_coverage=1 00:22:01.794 --rc genhtml_function_coverage=1 00:22:01.794 --rc genhtml_legend=1 00:22:01.794 --rc geninfo_all_blocks=1 00:22:01.794 --rc geninfo_unexecuted_blocks=1 00:22:01.794 00:22:01.794 ' 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:01.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.794 --rc genhtml_branch_coverage=1 00:22:01.794 --rc genhtml_function_coverage=1 00:22:01.794 --rc genhtml_legend=1 00:22:01.794 --rc geninfo_all_blocks=1 00:22:01.794 --rc geninfo_unexecuted_blocks=1 00:22:01.794 00:22:01.794 ' 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:01.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.794 --rc genhtml_branch_coverage=1 00:22:01.794 --rc genhtml_function_coverage=1 00:22:01.794 --rc genhtml_legend=1 00:22:01.794 --rc geninfo_all_blocks=1 00:22:01.794 --rc geninfo_unexecuted_blocks=1 00:22:01.794 00:22:01.794 ' 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:01.794 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.794 --rc genhtml_branch_coverage=1 00:22:01.794 --rc genhtml_function_coverage=1 00:22:01.794 --rc genhtml_legend=1 00:22:01.794 --rc geninfo_all_blocks=1 00:22:01.794 --rc geninfo_unexecuted_blocks=1 00:22:01.794 00:22:01.794 ' 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.794 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:01.795 20:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.795 20:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.795 20:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.795 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.370 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.370 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.370 20:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.370 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.370 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.370 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.370 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.370 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.370 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.370 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:08.371 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:08.371 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.371 20:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:08.371 Found net devices under 0000:af:00.0: cvl_0_0 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.371 20:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:08.371 Found net devices under 0000:af:00.1: cvl_0_1 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.371 20:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:08.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:22:08.371 00:22:08.371 --- 10.0.0.2 ping statistics --- 00:22:08.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.371 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:22:08.371 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:22:08.371 00:22:08.371 --- 10.0.0.1 ping statistics --- 00:22:08.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.372 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=401320 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 401320 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 401320 ']' 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.372 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.372 [2024-12-05 20:42:00.915668] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:22:08.372 [2024-12-05 20:42:00.915713] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.372 [2024-12-05 20:42:00.992970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.372 [2024-12-05 20:42:01.031886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.372 [2024-12-05 20:42:01.031919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.372 [2024-12-05 20:42:01.031926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.372 [2024-12-05 20:42:01.031932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.372 [2024-12-05 20:42:01.031936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:08.372 [2024-12-05 20:42:01.032486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.372 [2024-12-05 20:42:01.166730] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.372 Malloc0 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.372 [2024-12-05 20:42:01.206662] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=401362 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=401363 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=401364 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 401362 00:22:08.372 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:08.372 [2024-12-05 20:42:01.291332] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:08.372 [2024-12-05 20:42:01.291522] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:08.372 [2024-12-05 20:42:01.291672] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:08.942 Initializing NVMe Controllers 00:22:08.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:08.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:08.942 Initialization complete. Launching workers. 00:22:08.942 ======================================================== 00:22:08.942 Latency(us) 00:22:08.942 Device Information : IOPS MiB/s Average min max 00:22:08.943 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6709.93 26.21 148.72 119.48 340.72 00:22:08.943 ======================================================== 00:22:08.943 Total : 6709.93 26.21 148.72 119.48 340.72 00:22:08.943 00:22:09.203 Initializing NVMe Controllers 00:22:09.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:09.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:09.203 Initialization complete. Launching workers. 
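The test launches three `spdk_nvme_perf` clients concurrently on distinct core masks (0x2, 0x4, 0x8) and then waits on each pid in turn, which is why the three latency tables interleave in the log. A dry-run sketch of that launch pattern; `RUN=echo` is added here so it runs without SPDK or a live target:

```shell
# Sketch of the concurrent perf-client launch seen above: queue depth 1,
# 4 KiB random reads for 1 s, one client per core mask. RUN=echo keeps
# this a dry run (hypothetical guard, not in the original script).
RUN=${RUN:-echo}
TR='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

pids=()
for mask in 0x2 0x4 0x8; do
  $RUN spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 -r "$TR" &
  pids+=($!)
done
wait "${pids[@]}"   # the original script waits on each perf pid before teardown
```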
00:22:09.203 ======================================================== 00:22:09.203 Latency(us) 00:22:09.203 Device Information : IOPS MiB/s Average min max 00:22:09.203 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41049.10 40615.18 41905.93 00:22:09.203 ======================================================== 00:22:09.203 Total : 25.00 0.10 41049.10 40615.18 41905.93 00:22:09.204 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 401363 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 401364 00:22:09.204 Initializing NVMe Controllers 00:22:09.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:09.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:09.204 Initialization complete. Launching workers. 00:22:09.204 ======================================================== 00:22:09.204 Latency(us) 00:22:09.204 Device Information : IOPS MiB/s Average min max 00:22:09.204 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6718.00 26.24 148.54 115.98 380.43 00:22:09.204 ======================================================== 00:22:09.204 Total : 6718.00 26.24 148.54 115.98 380.43 00:22:09.204 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:09.204 20:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:09.204 rmmod nvme_tcp 00:22:09.204 rmmod nvme_fabrics 00:22:09.204 rmmod nvme_keyring 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 401320 ']' 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 401320 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 401320 ']' 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 401320 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.204 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 401320 00:22:09.463 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.463 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 401320' 00:22:09.464 killing process with pid 401320 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 401320 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 401320 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.464 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.003 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:12.003 00:22:12.003 real 0m10.218s 00:22:12.003 user 0m6.884s 00:22:12.003 
sys 0m5.463s 00:22:12.003 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.003 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:12.003 ************************************ 00:22:12.003 END TEST nvmf_control_msg_list 00:22:12.003 ************************************ 00:22:12.003 20:42:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:12.003 20:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:12.003 20:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.003 20:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:12.003 ************************************ 00:22:12.003 START TEST nvmf_wait_for_buf 00:22:12.003 ************************************ 00:22:12.003 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:12.003 * Looking for test storage... 
00:22:12.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.003 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:22:12.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.004 --rc genhtml_branch_coverage=1 00:22:12.004 --rc genhtml_function_coverage=1 00:22:12.004 --rc genhtml_legend=1 00:22:12.004 --rc geninfo_all_blocks=1 00:22:12.004 --rc geninfo_unexecuted_blocks=1 00:22:12.004 00:22:12.004 ' 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:12.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.004 --rc genhtml_branch_coverage=1 00:22:12.004 --rc genhtml_function_coverage=1 00:22:12.004 --rc genhtml_legend=1 00:22:12.004 --rc geninfo_all_blocks=1 00:22:12.004 --rc geninfo_unexecuted_blocks=1 00:22:12.004 00:22:12.004 ' 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:12.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.004 --rc genhtml_branch_coverage=1 00:22:12.004 --rc genhtml_function_coverage=1 00:22:12.004 --rc genhtml_legend=1 00:22:12.004 --rc geninfo_all_blocks=1 00:22:12.004 --rc geninfo_unexecuted_blocks=1 00:22:12.004 00:22:12.004 ' 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:12.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.004 --rc genhtml_branch_coverage=1 00:22:12.004 --rc genhtml_function_coverage=1 00:22:12.004 --rc genhtml_legend=1 00:22:12.004 --rc geninfo_all_blocks=1 00:22:12.004 --rc geninfo_unexecuted_blocks=1 00:22:12.004 00:22:12.004 ' 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.004 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:12.005 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:18.575 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.575 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:18.575 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:18.575 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:18.575 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:18.575 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:18.575 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:18.575 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:18.576 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:18.576 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:18.576 Found net devices under 0000:af:00.0: cvl_0_0 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.576 20:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:18.576 Found net devices under 0000:af:00.1: cvl_0_1 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:18.576 20:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.576 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:18.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.576 20:42:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:18.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:18.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:22:18.576 00:22:18.576 --- 10.0.0.2 ping statistics --- 00:22:18.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.576 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:22:18.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:18.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:22:18.576 00:22:18.576 --- 10.0.0.1 ping statistics --- 00:22:18.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.576 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:22:18.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=405794 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 405794 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 405794 ']' 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.577 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:18.577 [2024-12-05 20:42:11.235372] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:22:18.577 [2024-12-05 20:42:11.235413] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.577 [2024-12-05 20:42:11.309667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.577 [2024-12-05 20:42:11.347362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.577 [2024-12-05 20:42:11.347396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:18.577 [2024-12-05 20:42:11.347402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.577 [2024-12-05 20:42:11.347408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.577 [2024-12-05 20:42:11.347412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.577 [2024-12-05 20:42:11.347953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.836 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.836 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:18.836 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:18.837 
20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:18.837 Malloc0 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:22:18.837 [2024-12-05 20:42:12.175146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:18.837 [2024-12-05 20:42:12.203327] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:18.837 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:19.097 [2024-12-05 20:42:12.286133] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:20.478 Initializing NVMe Controllers 00:22:20.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:20.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:20.478 Initialization complete. Launching workers. 00:22:20.478 ======================================================== 00:22:20.478 Latency(us) 00:22:20.478 Device Information : IOPS MiB/s Average min max 00:22:20.478 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33541.02 29518.72 71139.20 00:22:20.478 ======================================================== 00:22:20.478 Total : 124.00 15.50 33541.02 29518.72 71139.20 00:22:20.478 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.478 20:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:20.478 rmmod nvme_tcp 00:22:20.478 rmmod nvme_fabrics 00:22:20.478 rmmod nvme_keyring 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 405794 ']' 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 405794 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 405794 ']' 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 405794 
00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 405794 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 405794' 00:22:20.478 killing process with pid 405794 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 405794 00:22:20.478 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 405794 00:22:20.738 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:20.738 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:20.738 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:20.738 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:20.738 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:20.738 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:20.738 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:20.738 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:20.738 20:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:20.738 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.738 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.738 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.646 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:22.905 00:22:22.905 real 0m11.098s 00:22:22.905 user 0m4.781s 00:22:22.905 sys 0m4.906s 00:22:22.905 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:22.905 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.905 ************************************ 00:22:22.905 END TEST nvmf_wait_for_buf 00:22:22.905 ************************************ 00:22:22.905 20:42:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:22.905 20:42:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:22.905 20:42:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:22.905 20:42:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:22.905 20:42:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:22.905 20:42:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:29.482 
20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:29.482 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.482 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.483 20:42:21 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:29.483 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:29.483 Found net devices under 0000:af:00.0: cvl_0_0 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:29.483 Found net devices under 0000:af:00.1: cvl_0_1 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:29.483 ************************************ 00:22:29.483 START TEST nvmf_perf_adq 00:22:29.483 ************************************ 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:29.483 * Looking for test storage... 00:22:29.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:29.483 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:29.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.483 --rc genhtml_branch_coverage=1 00:22:29.483 --rc genhtml_function_coverage=1 00:22:29.483 --rc genhtml_legend=1 00:22:29.483 --rc geninfo_all_blocks=1 00:22:29.483 --rc geninfo_unexecuted_blocks=1 00:22:29.483 00:22:29.483 ' 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:29.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.483 --rc genhtml_branch_coverage=1 00:22:29.483 --rc genhtml_function_coverage=1 00:22:29.483 --rc genhtml_legend=1 00:22:29.483 --rc geninfo_all_blocks=1 00:22:29.483 --rc geninfo_unexecuted_blocks=1 00:22:29.483 00:22:29.483 ' 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:29.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.483 --rc genhtml_branch_coverage=1 00:22:29.483 --rc genhtml_function_coverage=1 00:22:29.483 --rc genhtml_legend=1 00:22:29.483 --rc geninfo_all_blocks=1 00:22:29.483 --rc geninfo_unexecuted_blocks=1 00:22:29.483 00:22:29.483 ' 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:29.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.483 --rc genhtml_branch_coverage=1 00:22:29.483 --rc genhtml_function_coverage=1 00:22:29.483 --rc genhtml_legend=1 00:22:29.483 --rc geninfo_all_blocks=1 00:22:29.483 --rc geninfo_unexecuted_blocks=1 00:22:29.483 00:22:29.483 ' 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.483 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
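The trace above opens with scripts/common.sh comparing two dotted version strings field by field: each field is validated as decimal, the shorter list is padded with zeros, and the first differing field decides. A standalone sketch of that loop, runnable outside the harness (`ver_lt` is an illustrative name, not the script's actual helper):

```shell
#!/usr/bin/env bash
# Compare dotted version strings field by field, as the traced
# scripts/common.sh loop does: split on '.', pad the shorter list with
# zeros, and let the first differing field decide.
ver_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
        (( a < b )) && return 0             # first difference decides
        (( a > b )) && return 1
    done
    return 1                                # equal: not strictly lower
}

ver_lt 1.9 2.0 && echo "1.9 < 2.0"
ver_lt 1.9.1 1.10 && echo "1.9.1 < 1.10"   # numeric, not lexicographic
```

Note the padding step is what makes `1.9.1 < 1.10` come out right; a plain string comparison would get it wrong.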
00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.484 20:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:29.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:29.484 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.777 20:42:27 
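The `[: : integer expression expected` message above is nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: the POSIX `test` builtin requires integer operands for `-eq`, and the variable behind the test expanded to an empty string. A minimal sketch of the failure mode and the usual guard, a parameter-expansion default (`check_flag` is an illustrative name, not the script's):

```shell
# '[ "" -eq 1 ]' errors out: test's -eq needs integer operands, and an
# empty string is not an integer. Defaulting empty/unset values to 0
# with ${1:-0} keeps the test well-formed in every case.
check_flag() {
    [ "${1:-0}" -eq 1 ]
}

check_flag 1 && echo "flag set"        # integer operand: fine
check_flag "" || echo "flag unset"     # empty defaults to 0, no error
```

The guarded form fails cleanly (exit 1) for empty, unset, or `0` values instead of printing an error to stderr.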
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:34.777 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:34.777 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:34.777 Found net devices under 0000:af:00.0: cvl_0_0 00:22:34.777 20:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:34.777 Found net devices under 0000:af:00.1: cvl_0_1 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
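The `Found net devices under 0000:af:00.0: cvl_0_0` entries come from globbing each PCI device's sysfs `net/` directory and stripping the path prefix with `${pci_net_devs[@]##*/}`. A sketch of that lookup with the sysfs root as a parameter so it can run against a fake tree (an assumption for testability; the real loop in nvmf/common.sh hardcodes `/sys` and also checks the interface's operstate, omitted here):

```shell
# List the network interface names sysfs exposes for one PCI address,
# mirroring the traced glob over /sys/bus/pci/devices/$pci/net/.
pci_net_names() {
    local sysfs=$1 pci=$2
    local devs=("$sysfs/bus/pci/devices/$pci/net/"*)
    [[ -e ${devs[0]} ]] || return 1       # no interfaces for this device
    printf '%s\n' "${devs[@]##*/}"        # keep basenames only
}

# Demo against a fake sysfs tree instead of the real /sys:
root=$(mktemp -d)
mkdir -p "$root/bus/pci/devices/0000:af:00.0/net/cvl_0_0"
pci_net_names "$root" 0000:af:00.0      # prints: cvl_0_0
rm -rf "$root"
```

Without `nullglob`, an unmatched glob leaves the literal pattern in the array, which is why the `-e` existence check is needed before printing.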
00:22:34.777 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:35.716 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:39.025 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:44.307 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:44.308 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:44.308 20:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:44.308 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:44.308 Found net devices under 0000:af:00.0: cvl_0_0 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:44.308 Found net devices under 0000:af:00.1: cvl_0_1 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:44.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:44.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:22:44.308 00:22:44.308 --- 10.0.0.2 ping statistics --- 00:22:44.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.308 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:22:44.308 00:22:44.308 --- 10.0.0.1 ping statistics --- 00:22:44.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.308 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=414744 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 414744 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 414744 ']' 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.308 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.308 [2024-12-05 20:42:37.520548] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:22:44.308 [2024-12-05 20:42:37.520589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.308 [2024-12-05 20:42:37.593256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.309 [2024-12-05 20:42:37.634377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.309 [2024-12-05 20:42:37.634413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.309 [2024-12-05 20:42:37.634420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.309 [2024-12-05 20:42:37.634425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.309 [2024-12-05 20:42:37.634430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:44.309 [2024-12-05 20:42:37.638080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.309 [2024-12-05 20:42:37.638107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.309 [2024-12-05 20:42:37.638233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.309 [2024-12-05 20:42:37.638233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:45.249 20:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 [2024-12-05 20:42:38.505039] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 Malloc1 00:22:45.249 20:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 [2024-12-05 20:42:38.564473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=415027 00:22:45.249 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:45.249 20:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:47.158 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:47.158 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.159 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.159 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.159 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:47.159 "tick_rate": 2200000000, 00:22:47.159 "poll_groups": [ 00:22:47.159 { 00:22:47.159 "name": "nvmf_tgt_poll_group_000", 00:22:47.159 "admin_qpairs": 1, 00:22:47.159 "io_qpairs": 1, 00:22:47.159 "current_admin_qpairs": 1, 00:22:47.159 "current_io_qpairs": 1, 00:22:47.159 "pending_bdev_io": 0, 00:22:47.159 "completed_nvme_io": 21260, 00:22:47.159 "transports": [ 00:22:47.159 { 00:22:47.159 "trtype": "TCP" 00:22:47.159 } 00:22:47.159 ] 00:22:47.159 }, 00:22:47.159 { 00:22:47.159 "name": "nvmf_tgt_poll_group_001", 00:22:47.159 "admin_qpairs": 0, 00:22:47.159 "io_qpairs": 1, 00:22:47.159 "current_admin_qpairs": 0, 00:22:47.159 "current_io_qpairs": 1, 00:22:47.159 "pending_bdev_io": 0, 00:22:47.159 "completed_nvme_io": 21209, 00:22:47.159 "transports": [ 00:22:47.159 { 00:22:47.159 "trtype": "TCP" 00:22:47.159 } 00:22:47.159 ] 00:22:47.159 }, 00:22:47.159 { 00:22:47.159 "name": "nvmf_tgt_poll_group_002", 00:22:47.159 "admin_qpairs": 0, 00:22:47.159 "io_qpairs": 1, 00:22:47.159 "current_admin_qpairs": 0, 00:22:47.159 "current_io_qpairs": 1, 00:22:47.159 "pending_bdev_io": 0, 00:22:47.159 "completed_nvme_io": 21682, 00:22:47.159 
"transports": [ 00:22:47.159 { 00:22:47.159 "trtype": "TCP" 00:22:47.159 } 00:22:47.159 ] 00:22:47.159 }, 00:22:47.159 { 00:22:47.159 "name": "nvmf_tgt_poll_group_003", 00:22:47.159 "admin_qpairs": 0, 00:22:47.159 "io_qpairs": 1, 00:22:47.159 "current_admin_qpairs": 0, 00:22:47.159 "current_io_qpairs": 1, 00:22:47.159 "pending_bdev_io": 0, 00:22:47.159 "completed_nvme_io": 21156, 00:22:47.159 "transports": [ 00:22:47.159 { 00:22:47.159 "trtype": "TCP" 00:22:47.159 } 00:22:47.159 ] 00:22:47.159 } 00:22:47.159 ] 00:22:47.159 }' 00:22:47.159 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:47.159 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:47.418 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:47.418 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:47.418 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 415027 00:22:55.546 Initializing NVMe Controllers 00:22:55.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:55.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:55.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:55.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:55.546 Initialization complete. Launching workers. 
00:22:55.546 ======================================================== 00:22:55.546 Latency(us) 00:22:55.546 Device Information : IOPS MiB/s Average min max 00:22:55.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11323.07 44.23 5651.79 2203.55 9739.26 00:22:55.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11270.87 44.03 5677.98 1770.16 10234.70 00:22:55.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11495.36 44.90 5579.06 1820.54 43378.73 00:22:55.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11353.77 44.35 5636.36 2020.27 9844.51 00:22:55.547 ======================================================== 00:22:55.547 Total : 45443.06 177.51 5636.03 1770.16 43378.73 00:22:55.547 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:55.547 rmmod nvme_tcp 00:22:55.547 rmmod nvme_fabrics 00:22:55.547 rmmod nvme_keyring 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:55.547 20:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 414744 ']' 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 414744 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 414744 ']' 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 414744 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 414744 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 414744' 00:22:55.547 killing process with pid 414744 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 414744 00:22:55.547 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 414744 00:22:55.806 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:55.806 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:55.806 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:55.806 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:55.806 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:55.806 20:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:55.806 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:55.806 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:55.806 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:55.806 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.806 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.806 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.709 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.709 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:57.709 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:57.968 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:58.905 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:01.441 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:06.717 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:06.718 20:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:06.718 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:06.718 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:06.718 Found net devices under 0000:af:00.0: cvl_0_0 00:23:06.718 20:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:06.718 Found net devices under 0000:af:00.1: cvl_0_1 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:06.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:23:06.718 00:23:06.718 --- 10.0.0.2 ping statistics --- 00:23:06.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.718 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:06.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:23:06.718 00:23:06.718 --- 10.0.0.1 ping statistics --- 00:23:06.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.718 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:23:06.718 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:06.719 net.core.busy_poll = 1 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:06.719 net.core.busy_read = 1 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=419105 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 419105 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 419105 ']' 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.719 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.719 [2024-12-05 20:42:59.974804] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:23:06.719 [2024-12-05 20:42:59.974846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.719 [2024-12-05 20:43:00.052835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:06.719 [2024-12-05 20:43:00.098333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.719 [2024-12-05 20:43:00.098371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.719 [2024-12-05 20:43:00.098377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.719 [2024-12-05 20:43:00.098383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:06.719 [2024-12-05 20:43:00.098391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.719 [2024-12-05 20:43:00.099811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.719 [2024-12-05 20:43:00.099928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.719 [2024-12-05 20:43:00.100040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.719 [2024-12-05 20:43:00.100040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.657 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
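The `adq_configure_driver` steps traced above (hw-tc-offload, busy-poll sysctls, the mqprio qdisc, and the flower filter) can be sketched as the following dry-run, which only prints the commands — the real ones need root and an ADQ-capable NIC, and `adq_sketch` plus its parameters are illustrative names mirroring the log's values, not part of the test suite:

```shell
# Hedged dry-run sketch of the adq_configure_driver sequence in the log.
# adq_sketch, IFACE, TADDR and TPORT are illustrative; run() prints each
# command instead of executing it.
adq_sketch() {
    IFACE="$1"; TADDR="$2"; TPORT="$3"
    run() { echo "$@"; }    # dry-run: print instead of execute
    # Hardware TC offload on, packet-inspect optimization off
    run ethtool --offload "$IFACE" hw-tc-offload on
    run ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    # Busy polling keeps application threads spinning on the socket queue
    run sysctl -w net.core.busy_poll=1
    run sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ)
    run tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 \
        queues 2@0 2@2 hw 1 mode channel
    run tc qdisc add dev "$IFACE" ingress
    # Steer NVMe/TCP traffic (dst port 4420) into TC1 entirely in hardware
    run tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip "$TADDR"/32 ip_proto tcp dst_port "$TPORT" skip_sw hw_tc 1
}

adq_sketch cvl_0_0 10.0.0.2 4420
```

The `skip_sw hw_tc 1` pair is what makes this ADQ rather than ordinary tc filtering: classification happens only in the NIC, and matching flows land on the dedicated TC1 queue set.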
00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:07.658 [2024-12-05 20:43:00.963046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:07.658 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.658 20:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:07.658 Malloc1 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:07.658 [2024-12-05 20:43:01.026297] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=419362 
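The target-side half of the setup — the `adq_configure_nvmf_target` RPCs traced above — can be summarized as the dry-run below; `target_rpcs` is an illustrative helper that only prints rpc.py-style calls rather than issuing them to a live nvmf_tgt:

```shell
# Hedged sketch of the adq_configure_nvmf_target RPC sequence in the log.
# target_rpcs is illustrative: it echoes the calls instead of running rpc.py.
target_rpcs() {
    nqn=nqn.2016-06.io.spdk:cnode1
    # Placement-id maps each accepted connection onto the poll group that
    # owns its NIC queue, concentrating I/O on the ADQ queue set
    echo sock_impl_set_options -i posix \
        --enable-placement-id 1 --enable-zerocopy-send-server
    echo framework_start_init
    # --sock-priority 1 tags NVMe/TCP sockets for the TC1 traffic class
    echo nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    echo bdev_malloc_create 64 512 -b Malloc1
    echo nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    echo nvmf_subsystem_add_ns "$nqn" Malloc1
    echo nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
}

target_rpcs
```

Note the app was started with `--wait-for-rpc`: the sock options must be set before `framework_start_init`, which is why the sequence above runs in exactly this order.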
00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:07.658 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:10.198 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:10.198 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.198 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.198 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.198 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:10.198 "tick_rate": 2200000000, 00:23:10.198 "poll_groups": [ 00:23:10.198 { 00:23:10.198 "name": "nvmf_tgt_poll_group_000", 00:23:10.198 "admin_qpairs": 1, 00:23:10.198 "io_qpairs": 3, 00:23:10.198 "current_admin_qpairs": 1, 00:23:10.198 "current_io_qpairs": 3, 00:23:10.198 "pending_bdev_io": 0, 00:23:10.198 "completed_nvme_io": 34189, 00:23:10.198 "transports": [ 00:23:10.198 { 00:23:10.198 "trtype": "TCP" 00:23:10.198 } 00:23:10.198 ] 00:23:10.198 }, 00:23:10.198 { 00:23:10.198 "name": "nvmf_tgt_poll_group_001", 00:23:10.198 "admin_qpairs": 0, 00:23:10.198 "io_qpairs": 1, 00:23:10.198 "current_admin_qpairs": 0, 00:23:10.198 "current_io_qpairs": 1, 00:23:10.198 "pending_bdev_io": 0, 00:23:10.198 "completed_nvme_io": 25457, 00:23:10.198 "transports": [ 00:23:10.198 { 00:23:10.198 "trtype": "TCP" 00:23:10.198 } 00:23:10.198 ] 00:23:10.198 }, 00:23:10.198 { 00:23:10.198 "name": "nvmf_tgt_poll_group_002", 00:23:10.198 "admin_qpairs": 0, 00:23:10.198 "io_qpairs": 0, 00:23:10.198 "current_admin_qpairs": 0, 
00:23:10.198 "current_io_qpairs": 0, 00:23:10.198 "pending_bdev_io": 0, 00:23:10.198 "completed_nvme_io": 0, 00:23:10.198 "transports": [ 00:23:10.198 { 00:23:10.198 "trtype": "TCP" 00:23:10.198 } 00:23:10.198 ] 00:23:10.198 }, 00:23:10.198 { 00:23:10.198 "name": "nvmf_tgt_poll_group_003", 00:23:10.198 "admin_qpairs": 0, 00:23:10.198 "io_qpairs": 0, 00:23:10.198 "current_admin_qpairs": 0, 00:23:10.198 "current_io_qpairs": 0, 00:23:10.198 "pending_bdev_io": 0, 00:23:10.198 "completed_nvme_io": 0, 00:23:10.198 "transports": [ 00:23:10.198 { 00:23:10.198 "trtype": "TCP" 00:23:10.198 } 00:23:10.198 ] 00:23:10.198 } 00:23:10.198 ] 00:23:10.198 }' 00:23:10.198 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:10.198 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:10.198 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:10.198 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:10.198 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 419362 00:23:18.319 Initializing NVMe Controllers 00:23:18.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:18.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:18.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:18.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:18.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:18.319 Initialization complete. Launching workers. 
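The pass/fail criterion applied to the stats dump above is simply "at least 2 of the 4 poll groups saw zero I/O qpairs", i.e. ADQ steering confined the connections. A portable sketch of that check follows (grep -c stands in for the log's jq pipeline, and the inline JSON lines are a trimmed, fabricated stand-in for real `nvmf_get_stats` output):

```shell
# Hedged sketch of the idle-poll-group check: count poll groups with
# current_io_qpairs == 0 and fail if fewer than 2 are idle. The stats
# text is a fabricated, trimmed stand-in for nvmf_get_stats output.
stats='{ "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 3 }
{ "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 }
{ "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0 }
{ "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0 }'
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 0')
# With ADQ steering traffic onto 2 of 4 hardware queues, the other two
# poll groups should see no I/O connections at all
if [ "$count" -lt 2 ]; then
    echo "ADQ steering failed: only $count idle poll groups" >&2
fi
echo "idle poll groups: $count"
```

This mirrors the `count=2; [[ 2 -lt 2 ]]` branch in the log: two busy groups (000, 001) and two idle ones (002, 003) means the steering worked.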
00:23:18.319 ======================================================== 00:23:18.319 Latency(us) 00:23:18.319 Device Information : IOPS MiB/s Average min max 00:23:18.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5577.20 21.79 11492.36 1359.55 56509.37 00:23:18.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5827.20 22.76 10982.67 1192.73 57206.96 00:23:18.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5948.50 23.24 10765.02 1349.98 57289.70 00:23:18.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15087.70 58.94 4241.37 1308.19 45357.97 00:23:18.319 ======================================================== 00:23:18.319 Total : 32440.59 126.72 7895.10 1192.73 57289.70 00:23:18.319 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:18.319 rmmod nvme_tcp 00:23:18.319 rmmod nvme_fabrics 00:23:18.319 rmmod nvme_keyring 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:18.319 20:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 419105 ']' 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 419105 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 419105 ']' 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 419105 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 419105 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 419105' 00:23:18.319 killing process with pid 419105 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 419105 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 419105 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:18.319 20:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.319 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:21.611 00:23:21.611 real 0m52.752s 00:23:21.611 user 2m49.638s 00:23:21.611 sys 0m11.328s 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:21.611 ************************************ 00:23:21.611 END TEST nvmf_perf_adq 00:23:21.611 ************************************ 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:23:21.611 ************************************ 00:23:21.611 START TEST nvmf_shutdown 00:23:21.611 ************************************ 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:21.611 * Looking for test storage... 00:23:21.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.611 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.612 20:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.612 --rc genhtml_branch_coverage=1 00:23:21.612 --rc genhtml_function_coverage=1 00:23:21.612 --rc genhtml_legend=1 00:23:21.612 --rc geninfo_all_blocks=1 00:23:21.612 --rc geninfo_unexecuted_blocks=1 00:23:21.612 00:23:21.612 ' 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.612 --rc genhtml_branch_coverage=1 00:23:21.612 --rc genhtml_function_coverage=1 00:23:21.612 --rc genhtml_legend=1 00:23:21.612 --rc geninfo_all_blocks=1 00:23:21.612 --rc geninfo_unexecuted_blocks=1 00:23:21.612 00:23:21.612 ' 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.612 --rc genhtml_branch_coverage=1 00:23:21.612 --rc genhtml_function_coverage=1 00:23:21.612 --rc genhtml_legend=1 00:23:21.612 --rc geninfo_all_blocks=1 00:23:21.612 --rc geninfo_unexecuted_blocks=1 00:23:21.612 00:23:21.612 ' 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.612 --rc genhtml_branch_coverage=1 00:23:21.612 --rc genhtml_function_coverage=1 00:23:21.612 --rc genhtml_legend=1 00:23:21.612 --rc geninfo_all_blocks=1 00:23:21.612 --rc geninfo_unexecuted_blocks=1 00:23:21.612 00:23:21.612 ' 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.612 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:21.613 ************************************ 00:23:21.613 START TEST nvmf_shutdown_tc1 00:23:21.613 ************************************ 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:21.613 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:28.191 20:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.191 20:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:28.191 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.191 20:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:28.191 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:28.191 Found net devices under 0000:af:00.0: cvl_0_0 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.191 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:28.192 Found net devices under 0000:af:00.1: cvl_0_1 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.192 20:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:23:28.192 00:23:28.192 --- 10.0.0.2 ping statistics --- 00:23:28.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.192 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:28.192 00:23:28.192 --- 10.0.0.1 ping statistics --- 00:23:28.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.192 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=425093 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 425093 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425093 ']' 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:28.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.192 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.192 [2024-12-05 20:43:21.009765] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:23:28.192 [2024-12-05 20:43:21.009807] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.192 [2024-12-05 20:43:21.070914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.192 [2024-12-05 20:43:21.111149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.192 [2024-12-05 20:43:21.111187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.192 [2024-12-05 20:43:21.111195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.192 [2024-12-05 20:43:21.111202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.192 [2024-12-05 20:43:21.111207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:28.192 [2024-12-05 20:43:21.112784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.192 [2024-12-05 20:43:21.112897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.192 [2024-12-05 20:43:21.112983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.192 [2024-12-05 20:43:21.112983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:28.192 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.192 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:28.192 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.192 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.192 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.192 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.192 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.192 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.192 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.192 [2024-12-05 20:43:21.257314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.192 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.193 20:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.193 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.193 Malloc1 00:23:28.193 [2024-12-05 20:43:21.380387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.193 Malloc2 00:23:28.193 Malloc3 00:23:28.193 Malloc4 00:23:28.193 Malloc5 00:23:28.193 Malloc6 00:23:28.193 Malloc7 00:23:28.453 Malloc8 00:23:28.453 Malloc9 
00:23:28.453 Malloc10 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=425342 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 425342 /var/tmp/bdevperf.sock 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425342 ']' 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.453 { 00:23:28.453 "params": { 00:23:28.453 "name": "Nvme$subsystem", 00:23:28.453 "trtype": "$TEST_TRANSPORT", 00:23:28.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.453 "adrfam": "ipv4", 00:23:28.453 "trsvcid": "$NVMF_PORT", 00:23:28.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.453 "hdgst": ${hdgst:-false}, 00:23:28.453 "ddgst": ${ddgst:-false} 00:23:28.453 }, 00:23:28.453 "method": "bdev_nvme_attach_controller" 00:23:28.453 } 00:23:28.453 EOF 00:23:28.453 )") 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.453 { 00:23:28.453 "params": { 00:23:28.453 "name": "Nvme$subsystem", 00:23:28.453 "trtype": "$TEST_TRANSPORT", 00:23:28.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.453 "adrfam": "ipv4", 00:23:28.453 "trsvcid": "$NVMF_PORT", 00:23:28.453 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.453 "hdgst": ${hdgst:-false}, 00:23:28.453 "ddgst": ${ddgst:-false} 00:23:28.453 }, 00:23:28.453 "method": "bdev_nvme_attach_controller" 00:23:28.453 } 00:23:28.453 EOF 00:23:28.453 )") 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.453 { 00:23:28.453 "params": { 00:23:28.453 "name": "Nvme$subsystem", 00:23:28.453 "trtype": "$TEST_TRANSPORT", 00:23:28.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.453 "adrfam": "ipv4", 00:23:28.453 "trsvcid": "$NVMF_PORT", 00:23:28.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.453 "hdgst": ${hdgst:-false}, 00:23:28.453 "ddgst": ${ddgst:-false} 00:23:28.453 }, 00:23:28.453 "method": "bdev_nvme_attach_controller" 00:23:28.453 } 00:23:28.453 EOF 00:23:28.453 )") 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.453 { 00:23:28.453 "params": { 00:23:28.453 "name": "Nvme$subsystem", 00:23:28.453 "trtype": "$TEST_TRANSPORT", 00:23:28.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.453 "adrfam": "ipv4", 00:23:28.453 "trsvcid": "$NVMF_PORT", 00:23:28.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.453 "hdgst": 
${hdgst:-false}, 00:23:28.453 "ddgst": ${ddgst:-false} 00:23:28.453 }, 00:23:28.453 "method": "bdev_nvme_attach_controller" 00:23:28.453 } 00:23:28.453 EOF 00:23:28.453 )") 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.453 { 00:23:28.453 "params": { 00:23:28.453 "name": "Nvme$subsystem", 00:23:28.453 "trtype": "$TEST_TRANSPORT", 00:23:28.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.453 "adrfam": "ipv4", 00:23:28.453 "trsvcid": "$NVMF_PORT", 00:23:28.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.453 "hdgst": ${hdgst:-false}, 00:23:28.453 "ddgst": ${ddgst:-false} 00:23:28.453 }, 00:23:28.453 "method": "bdev_nvme_attach_controller" 00:23:28.453 } 00:23:28.453 EOF 00:23:28.453 )") 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.453 { 00:23:28.453 "params": { 00:23:28.453 "name": "Nvme$subsystem", 00:23:28.453 "trtype": "$TEST_TRANSPORT", 00:23:28.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.453 "adrfam": "ipv4", 00:23:28.453 "trsvcid": "$NVMF_PORT", 00:23:28.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.453 "hdgst": ${hdgst:-false}, 00:23:28.453 "ddgst": ${ddgst:-false} 00:23:28.453 }, 00:23:28.453 "method": "bdev_nvme_attach_controller" 
00:23:28.453 } 00:23:28.453 EOF 00:23:28.453 )") 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.453 { 00:23:28.453 "params": { 00:23:28.453 "name": "Nvme$subsystem", 00:23:28.453 "trtype": "$TEST_TRANSPORT", 00:23:28.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.453 "adrfam": "ipv4", 00:23:28.453 "trsvcid": "$NVMF_PORT", 00:23:28.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.453 "hdgst": ${hdgst:-false}, 00:23:28.453 "ddgst": ${ddgst:-false} 00:23:28.453 }, 00:23:28.453 "method": "bdev_nvme_attach_controller" 00:23:28.453 } 00:23:28.453 EOF 00:23:28.453 )") 00:23:28.453 [2024-12-05 20:43:21.848815] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:23:28.453 [2024-12-05 20:43:21.848862] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.453 { 00:23:28.453 "params": { 00:23:28.453 "name": "Nvme$subsystem", 00:23:28.453 "trtype": "$TEST_TRANSPORT", 00:23:28.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.453 "adrfam": "ipv4", 00:23:28.453 "trsvcid": "$NVMF_PORT", 00:23:28.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.453 "hdgst": ${hdgst:-false}, 00:23:28.453 "ddgst": ${ddgst:-false} 00:23:28.453 }, 00:23:28.453 "method": "bdev_nvme_attach_controller" 00:23:28.453 } 00:23:28.453 EOF 00:23:28.453 )") 00:23:28.453 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:28.454 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.454 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.454 { 00:23:28.454 "params": { 00:23:28.454 "name": "Nvme$subsystem", 00:23:28.454 "trtype": "$TEST_TRANSPORT", 00:23:28.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "$NVMF_PORT", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.454 "hdgst": ${hdgst:-false}, 
00:23:28.454 "ddgst": ${ddgst:-false} 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 } 00:23:28.454 EOF 00:23:28.454 )") 00:23:28.454 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:28.454 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.454 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.454 { 00:23:28.454 "params": { 00:23:28.454 "name": "Nvme$subsystem", 00:23:28.454 "trtype": "$TEST_TRANSPORT", 00:23:28.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "$NVMF_PORT", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.454 "hdgst": ${hdgst:-false}, 00:23:28.454 "ddgst": ${ddgst:-false} 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 } 00:23:28.454 EOF 00:23:28.454 )") 00:23:28.454 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:28.454 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:23:28.454 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:28.454 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:28.454 "params": { 00:23:28.454 "name": "Nvme1", 00:23:28.454 "trtype": "tcp", 00:23:28.454 "traddr": "10.0.0.2", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "4420", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.454 "hdgst": false, 00:23:28.454 "ddgst": false 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 },{ 00:23:28.454 "params": { 00:23:28.454 "name": "Nvme2", 00:23:28.454 "trtype": "tcp", 00:23:28.454 "traddr": "10.0.0.2", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "4420", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:28.454 "hdgst": false, 00:23:28.454 "ddgst": false 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 },{ 00:23:28.454 "params": { 00:23:28.454 "name": "Nvme3", 00:23:28.454 "trtype": "tcp", 00:23:28.454 "traddr": "10.0.0.2", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "4420", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:28.454 "hdgst": false, 00:23:28.454 "ddgst": false 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 },{ 00:23:28.454 "params": { 00:23:28.454 "name": "Nvme4", 00:23:28.454 "trtype": "tcp", 00:23:28.454 "traddr": "10.0.0.2", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "4420", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:28.454 "hdgst": false, 00:23:28.454 "ddgst": false 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 },{ 00:23:28.454 "params": { 
00:23:28.454 "name": "Nvme5", 00:23:28.454 "trtype": "tcp", 00:23:28.454 "traddr": "10.0.0.2", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "4420", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:28.454 "hdgst": false, 00:23:28.454 "ddgst": false 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 },{ 00:23:28.454 "params": { 00:23:28.454 "name": "Nvme6", 00:23:28.454 "trtype": "tcp", 00:23:28.454 "traddr": "10.0.0.2", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "4420", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:28.454 "hdgst": false, 00:23:28.454 "ddgst": false 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 },{ 00:23:28.454 "params": { 00:23:28.454 "name": "Nvme7", 00:23:28.454 "trtype": "tcp", 00:23:28.454 "traddr": "10.0.0.2", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "4420", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:28.454 "hdgst": false, 00:23:28.454 "ddgst": false 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 },{ 00:23:28.454 "params": { 00:23:28.454 "name": "Nvme8", 00:23:28.454 "trtype": "tcp", 00:23:28.454 "traddr": "10.0.0.2", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "4420", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:28.454 "hdgst": false, 00:23:28.454 "ddgst": false 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 },{ 00:23:28.454 "params": { 00:23:28.454 "name": "Nvme9", 00:23:28.454 "trtype": "tcp", 00:23:28.454 "traddr": "10.0.0.2", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "4420", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:28.454 "hdgst": false, 00:23:28.454 "ddgst": false 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 },{ 00:23:28.454 "params": { 00:23:28.454 "name": "Nvme10", 00:23:28.454 "trtype": "tcp", 00:23:28.454 "traddr": "10.0.0.2", 00:23:28.454 "adrfam": "ipv4", 00:23:28.454 "trsvcid": "4420", 00:23:28.454 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:28.454 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:28.454 "hdgst": false, 00:23:28.454 "ddgst": false 00:23:28.454 }, 00:23:28.454 "method": "bdev_nvme_attach_controller" 00:23:28.454 }' 00:23:28.715 [2024-12-05 20:43:21.923469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.715 [2024-12-05 20:43:21.961175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.096 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.096 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:30.096 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:30.096 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.096 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:30.096 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.096 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 425342 00:23:30.096 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:30.096 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:31.033 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 425342 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:31.033 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 425093 00:23:31.033 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:31.033 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:31.033 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:31.033 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:31.033 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.033 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.033 { 00:23:31.034 "params": { 00:23:31.034 "name": "Nvme$subsystem", 00:23:31.034 "trtype": "$TEST_TRANSPORT", 00:23:31.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.034 "adrfam": "ipv4", 00:23:31.034 "trsvcid": "$NVMF_PORT", 00:23:31.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.034 "hdgst": ${hdgst:-false}, 00:23:31.034 "ddgst": ${ddgst:-false} 00:23:31.034 }, 00:23:31.034 "method": "bdev_nvme_attach_controller" 00:23:31.034 } 00:23:31.034 EOF 00:23:31.034 )") 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.034 20:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.034 { 00:23:31.034 "params": { 00:23:31.034 "name": "Nvme$subsystem", 00:23:31.034 "trtype": "$TEST_TRANSPORT", 00:23:31.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.034 "adrfam": "ipv4", 00:23:31.034 "trsvcid": "$NVMF_PORT", 00:23:31.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.034 "hdgst": ${hdgst:-false}, 00:23:31.034 "ddgst": ${ddgst:-false} 00:23:31.034 }, 00:23:31.034 "method": "bdev_nvme_attach_controller" 00:23:31.034 } 00:23:31.034 EOF 00:23:31.034 )") 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.034 { 00:23:31.034 "params": { 00:23:31.034 "name": "Nvme$subsystem", 00:23:31.034 "trtype": "$TEST_TRANSPORT", 00:23:31.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.034 "adrfam": "ipv4", 00:23:31.034 "trsvcid": "$NVMF_PORT", 00:23:31.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.034 "hdgst": ${hdgst:-false}, 00:23:31.034 "ddgst": ${ddgst:-false} 00:23:31.034 }, 00:23:31.034 "method": "bdev_nvme_attach_controller" 00:23:31.034 } 00:23:31.034 EOF 00:23:31.034 )") 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.034 
20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.034 { 00:23:31.034 "params": { 00:23:31.034 "name": "Nvme$subsystem", 00:23:31.034 "trtype": "$TEST_TRANSPORT", 00:23:31.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.034 "adrfam": "ipv4", 00:23:31.034 "trsvcid": "$NVMF_PORT", 00:23:31.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.034 "hdgst": ${hdgst:-false}, 00:23:31.034 "ddgst": ${ddgst:-false} 00:23:31.034 }, 00:23:31.034 "method": "bdev_nvme_attach_controller" 00:23:31.034 } 00:23:31.034 EOF 00:23:31.034 )") 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.034 { 00:23:31.034 "params": { 00:23:31.034 "name": "Nvme$subsystem", 00:23:31.034 "trtype": "$TEST_TRANSPORT", 00:23:31.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.034 "adrfam": "ipv4", 00:23:31.034 "trsvcid": "$NVMF_PORT", 00:23:31.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.034 "hdgst": ${hdgst:-false}, 00:23:31.034 "ddgst": ${ddgst:-false} 00:23:31.034 }, 00:23:31.034 "method": "bdev_nvme_attach_controller" 00:23:31.034 } 00:23:31.034 EOF 00:23:31.034 )") 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:23:31.034 { 00:23:31.034 "params": { 00:23:31.034 "name": "Nvme$subsystem", 00:23:31.034 "trtype": "$TEST_TRANSPORT", 00:23:31.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.034 "adrfam": "ipv4", 00:23:31.034 "trsvcid": "$NVMF_PORT", 00:23:31.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.034 "hdgst": ${hdgst:-false}, 00:23:31.034 "ddgst": ${ddgst:-false} 00:23:31.034 }, 00:23:31.034 "method": "bdev_nvme_attach_controller" 00:23:31.034 } 00:23:31.034 EOF 00:23:31.034 )") 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.034 { 00:23:31.034 "params": { 00:23:31.034 "name": "Nvme$subsystem", 00:23:31.034 "trtype": "$TEST_TRANSPORT", 00:23:31.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.034 "adrfam": "ipv4", 00:23:31.034 "trsvcid": "$NVMF_PORT", 00:23:31.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.034 "hdgst": ${hdgst:-false}, 00:23:31.034 "ddgst": ${ddgst:-false} 00:23:31.034 }, 00:23:31.034 "method": "bdev_nvme_attach_controller" 00:23:31.034 } 00:23:31.034 EOF 00:23:31.034 )") 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.034 [2024-12-05 20:43:24.459066] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:23:31.034 [2024-12-05 20:43:24.459111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425885 ] 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.034 { 00:23:31.034 "params": { 00:23:31.034 "name": "Nvme$subsystem", 00:23:31.034 "trtype": "$TEST_TRANSPORT", 00:23:31.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.034 "adrfam": "ipv4", 00:23:31.034 "trsvcid": "$NVMF_PORT", 00:23:31.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.034 "hdgst": ${hdgst:-false}, 00:23:31.034 "ddgst": ${ddgst:-false} 00:23:31.034 }, 00:23:31.034 "method": "bdev_nvme_attach_controller" 00:23:31.034 } 00:23:31.034 EOF 00:23:31.034 )") 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.034 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.034 { 00:23:31.034 "params": { 00:23:31.034 "name": "Nvme$subsystem", 00:23:31.034 "trtype": "$TEST_TRANSPORT", 00:23:31.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.034 "adrfam": "ipv4", 00:23:31.035 "trsvcid": "$NVMF_PORT", 00:23:31.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.035 "hdgst": ${hdgst:-false}, 00:23:31.035 "ddgst": ${ddgst:-false} 00:23:31.035 }, 00:23:31.035 "method": 
"bdev_nvme_attach_controller" 00:23:31.035 } 00:23:31.035 EOF 00:23:31.035 )") 00:23:31.035 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.295 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.295 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.295 { 00:23:31.295 "params": { 00:23:31.295 "name": "Nvme$subsystem", 00:23:31.295 "trtype": "$TEST_TRANSPORT", 00:23:31.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.295 "adrfam": "ipv4", 00:23:31.295 "trsvcid": "$NVMF_PORT", 00:23:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.295 "hdgst": ${hdgst:-false}, 00:23:31.295 "ddgst": ${ddgst:-false} 00:23:31.295 }, 00:23:31.295 "method": "bdev_nvme_attach_controller" 00:23:31.295 } 00:23:31.295 EOF 00:23:31.295 )") 00:23:31.295 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.295 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:23:31.295 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:31.295 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:31.295 "params": { 00:23:31.295 "name": "Nvme1", 00:23:31.295 "trtype": "tcp", 00:23:31.295 "traddr": "10.0.0.2", 00:23:31.295 "adrfam": "ipv4", 00:23:31.295 "trsvcid": "4420", 00:23:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.295 "hdgst": false, 00:23:31.295 "ddgst": false 00:23:31.295 }, 00:23:31.295 "method": "bdev_nvme_attach_controller" 00:23:31.295 },{ 00:23:31.295 "params": { 00:23:31.295 "name": "Nvme2", 00:23:31.295 "trtype": "tcp", 00:23:31.295 "traddr": "10.0.0.2", 00:23:31.295 "adrfam": "ipv4", 00:23:31.295 "trsvcid": "4420", 00:23:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:31.295 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:31.295 "hdgst": false, 00:23:31.295 "ddgst": false 00:23:31.295 }, 00:23:31.295 "method": "bdev_nvme_attach_controller" 00:23:31.295 },{ 00:23:31.295 "params": { 00:23:31.295 "name": "Nvme3", 00:23:31.295 "trtype": "tcp", 00:23:31.295 "traddr": "10.0.0.2", 00:23:31.295 "adrfam": "ipv4", 00:23:31.295 "trsvcid": "4420", 00:23:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:31.295 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:31.295 "hdgst": false, 00:23:31.295 "ddgst": false 00:23:31.295 }, 00:23:31.295 "method": "bdev_nvme_attach_controller" 00:23:31.295 },{ 00:23:31.295 "params": { 00:23:31.295 "name": "Nvme4", 00:23:31.295 "trtype": "tcp", 00:23:31.295 "traddr": "10.0.0.2", 00:23:31.295 "adrfam": "ipv4", 00:23:31.295 "trsvcid": "4420", 00:23:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:31.295 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:31.295 "hdgst": false, 00:23:31.295 "ddgst": false 00:23:31.295 }, 00:23:31.295 "method": "bdev_nvme_attach_controller" 00:23:31.295 },{ 00:23:31.295 "params": { 
00:23:31.295 "name": "Nvme5", 00:23:31.295 "trtype": "tcp", 00:23:31.295 "traddr": "10.0.0.2", 00:23:31.295 "adrfam": "ipv4", 00:23:31.295 "trsvcid": "4420", 00:23:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:31.295 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:31.295 "hdgst": false, 00:23:31.295 "ddgst": false 00:23:31.295 }, 00:23:31.295 "method": "bdev_nvme_attach_controller" 00:23:31.295 },{ 00:23:31.295 "params": { 00:23:31.295 "name": "Nvme6", 00:23:31.295 "trtype": "tcp", 00:23:31.295 "traddr": "10.0.0.2", 00:23:31.295 "adrfam": "ipv4", 00:23:31.295 "trsvcid": "4420", 00:23:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:31.295 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:31.295 "hdgst": false, 00:23:31.295 "ddgst": false 00:23:31.295 }, 00:23:31.295 "method": "bdev_nvme_attach_controller" 00:23:31.295 },{ 00:23:31.295 "params": { 00:23:31.295 "name": "Nvme7", 00:23:31.295 "trtype": "tcp", 00:23:31.295 "traddr": "10.0.0.2", 00:23:31.295 "adrfam": "ipv4", 00:23:31.295 "trsvcid": "4420", 00:23:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:31.295 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:31.295 "hdgst": false, 00:23:31.295 "ddgst": false 00:23:31.295 }, 00:23:31.295 "method": "bdev_nvme_attach_controller" 00:23:31.295 },{ 00:23:31.295 "params": { 00:23:31.295 "name": "Nvme8", 00:23:31.295 "trtype": "tcp", 00:23:31.295 "traddr": "10.0.0.2", 00:23:31.295 "adrfam": "ipv4", 00:23:31.295 "trsvcid": "4420", 00:23:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:31.295 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:31.295 "hdgst": false, 00:23:31.295 "ddgst": false 00:23:31.295 }, 00:23:31.295 "method": "bdev_nvme_attach_controller" 00:23:31.295 },{ 00:23:31.295 "params": { 00:23:31.295 "name": "Nvme9", 00:23:31.295 "trtype": "tcp", 00:23:31.295 "traddr": "10.0.0.2", 00:23:31.295 "adrfam": "ipv4", 00:23:31.295 "trsvcid": "4420", 00:23:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:31.295 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:31.295 "hdgst": false, 00:23:31.295 "ddgst": false 00:23:31.295 }, 00:23:31.295 "method": "bdev_nvme_attach_controller" 00:23:31.295 },{ 00:23:31.295 "params": { 00:23:31.295 "name": "Nvme10", 00:23:31.295 "trtype": "tcp", 00:23:31.295 "traddr": "10.0.0.2", 00:23:31.295 "adrfam": "ipv4", 00:23:31.295 "trsvcid": "4420", 00:23:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:31.295 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:31.295 "hdgst": false, 00:23:31.295 "ddgst": false 00:23:31.295 }, 00:23:31.295 "method": "bdev_nvme_attach_controller" 00:23:31.295 }' 00:23:31.295 [2024-12-05 20:43:24.537094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.295 [2024-12-05 20:43:24.575785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.672 Running I/O for 1 seconds... 00:23:33.871 2505.00 IOPS, 156.56 MiB/s 00:23:33.871 Latency(us) 00:23:33.871 [2024-12-05T19:43:27.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.871 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:33.871 Verification LBA range: start 0x0 length 0x400 00:23:33.871 Nvme1n1 : 1.13 283.74 17.73 0.00 0.00 223662.64 15490.33 197322.94 00:23:33.871 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:33.871 Verification LBA range: start 0x0 length 0x400 00:23:33.871 Nvme2n1 : 1.06 307.64 19.23 0.00 0.00 202662.56 5749.29 192556.68 00:23:33.871 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:33.871 Verification LBA range: start 0x0 length 0x400 00:23:33.871 Nvme3n1 : 1.05 305.15 19.07 0.00 0.00 202105.95 15490.33 198276.19 00:23:33.871 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:33.871 Verification LBA range: start 0x0 length 0x400 00:23:33.871 Nvme4n1 : 1.06 306.32 19.15 0.00 0.00 197340.26 2159.71 183024.17 00:23:33.871 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:33.871 Verification LBA range: start 0x0 length 0x400 00:23:33.871 Nvme5n1 : 1.09 302.58 18.91 0.00 0.00 197854.55 2934.23 202089.19 00:23:33.871 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:33.871 Verification LBA range: start 0x0 length 0x400 00:23:33.871 Nvme6n1 : 1.11 287.87 17.99 0.00 0.00 206144.61 16681.89 200182.69 00:23:33.871 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:33.871 Verification LBA range: start 0x0 length 0x400 00:23:33.871 Nvme7n1 : 1.14 337.15 21.07 0.00 0.00 173977.76 8221.79 210668.45 00:23:33.871 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:33.871 Verification LBA range: start 0x0 length 0x400 00:23:33.871 Nvme8n1 : 1.15 335.00 20.94 0.00 0.00 172342.07 11021.96 197322.94 00:23:33.871 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:33.871 Verification LBA range: start 0x0 length 0x400 00:23:33.871 Nvme9n1 : 1.15 335.19 20.95 0.00 0.00 170153.81 8162.21 202089.19 00:23:33.871 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:33.871 Verification LBA range: start 0x0 length 0x400 00:23:33.871 Nvme10n1 : 1.14 285.37 17.84 0.00 0.00 196896.44 1638.40 218294.46 00:23:33.871 [2024-12-05T19:43:27.312Z] =================================================================================================================== 00:23:33.871 [2024-12-05T19:43:27.312Z] Total : 3086.00 192.88 0.00 0.00 193097.40 1638.40 218294.46 00:23:33.871 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:33.871 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:33.871 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:23:33.871 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:33.871 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:33.871 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:33.871 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:33.871 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.871 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:33.871 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.871 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.871 rmmod nvme_tcp 00:23:33.871 rmmod nvme_fabrics 00:23:33.871 rmmod nvme_keyring 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 425093 ']' 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 425093 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 425093 ']' 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 425093 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425093 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425093' 00:23:34.131 killing process with pid 425093 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 425093 00:23:34.131 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 425093 00:23:34.391 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:34.392 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:34.392 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:34.392 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:34.392 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:34.392 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:34.392 20:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:34.392 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:34.392 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:34.392 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.392 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.392 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:36.932 00:23:36.932 real 0m14.914s 00:23:36.932 user 0m31.960s 00:23:36.932 sys 0m5.870s 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:36.932 ************************************ 00:23:36.932 END TEST nvmf_shutdown_tc1 00:23:36.932 ************************************ 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:36.932 ************************************ 00:23:36.932 
START TEST nvmf_shutdown_tc2 00:23:36.932 ************************************ 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.932 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.933 20:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:36.933 20:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:36.933 20:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:36.933 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:36.933 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:36.933 20:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.933 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.934 20:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:36.934 Found net devices under 0000:af:00.0: cvl_0_0 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:36.934 Found net devices under 0000:af:00.1: cvl_0_1 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:36.934 20:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.934 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:36.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:36.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:23:36.934 00:23:36.934 --- 10.0.0.2 ping statistics --- 00:23:36.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.934 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:36.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:23:36.934 00:23:36.934 --- 10.0.0.1 ping statistics --- 00:23:36.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.934 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:36.934 20:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=427018 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 427018 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 427018 ']' 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.934 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.934 [2024-12-05 20:43:30.267174] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:23:36.934 [2024-12-05 20:43:30.267215] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.934 [2024-12-05 20:43:30.344265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.194 [2024-12-05 20:43:30.382591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.194 [2024-12-05 20:43:30.382628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.194 [2024-12-05 20:43:30.382634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.194 [2024-12-05 20:43:30.382640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.194 [2024-12-05 20:43:30.382644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.194 [2024-12-05 20:43:30.384101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.194 [2024-12-05 20:43:30.384213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:37.194 [2024-12-05 20:43:30.384322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.194 [2024-12-05 20:43:30.384324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.768 [2024-12-05 20:43:31.118508] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.768 20:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.768 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.768 Malloc1 00:23:38.028 [2024-12-05 20:43:31.225291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.028 Malloc2 00:23:38.028 Malloc3 00:23:38.028 Malloc4 00:23:38.028 Malloc5 00:23:38.028 Malloc6 00:23:38.028 Malloc7 00:23:38.289 Malloc8 00:23:38.289 Malloc9 
00:23:38.289 Malloc10 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=427301 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 427301 /var/tmp/bdevperf.sock 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 427301 ']' 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:38.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.289 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.290 { 00:23:38.290 "params": { 00:23:38.290 "name": "Nvme$subsystem", 00:23:38.290 "trtype": "$TEST_TRANSPORT", 00:23:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.290 "adrfam": "ipv4", 00:23:38.290 "trsvcid": "$NVMF_PORT", 00:23:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.290 "hdgst": ${hdgst:-false}, 00:23:38.290 "ddgst": ${ddgst:-false} 00:23:38.290 }, 00:23:38.290 "method": "bdev_nvme_attach_controller" 00:23:38.290 } 00:23:38.290 EOF 00:23:38.290 )") 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.290 { 00:23:38.290 "params": { 00:23:38.290 "name": "Nvme$subsystem", 00:23:38.290 "trtype": "$TEST_TRANSPORT", 00:23:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.290 
"adrfam": "ipv4", 00:23:38.290 "trsvcid": "$NVMF_PORT", 00:23:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.290 "hdgst": ${hdgst:-false}, 00:23:38.290 "ddgst": ${ddgst:-false} 00:23:38.290 }, 00:23:38.290 "method": "bdev_nvme_attach_controller" 00:23:38.290 } 00:23:38.290 EOF 00:23:38.290 )") 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.290 { 00:23:38.290 "params": { 00:23:38.290 "name": "Nvme$subsystem", 00:23:38.290 "trtype": "$TEST_TRANSPORT", 00:23:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.290 "adrfam": "ipv4", 00:23:38.290 "trsvcid": "$NVMF_PORT", 00:23:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.290 "hdgst": ${hdgst:-false}, 00:23:38.290 "ddgst": ${ddgst:-false} 00:23:38.290 }, 00:23:38.290 "method": "bdev_nvme_attach_controller" 00:23:38.290 } 00:23:38.290 EOF 00:23:38.290 )") 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.290 { 00:23:38.290 "params": { 00:23:38.290 "name": "Nvme$subsystem", 00:23:38.290 "trtype": "$TEST_TRANSPORT", 00:23:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.290 "adrfam": "ipv4", 00:23:38.290 "trsvcid": "$NVMF_PORT", 00:23:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.290 "hdgst": ${hdgst:-false}, 00:23:38.290 "ddgst": ${ddgst:-false} 00:23:38.290 }, 00:23:38.290 "method": "bdev_nvme_attach_controller" 00:23:38.290 } 00:23:38.290 EOF 00:23:38.290 )") 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.290 { 00:23:38.290 "params": { 00:23:38.290 "name": "Nvme$subsystem", 00:23:38.290 "trtype": "$TEST_TRANSPORT", 00:23:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.290 "adrfam": "ipv4", 00:23:38.290 "trsvcid": "$NVMF_PORT", 00:23:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.290 "hdgst": ${hdgst:-false}, 00:23:38.290 "ddgst": ${ddgst:-false} 00:23:38.290 }, 00:23:38.290 "method": "bdev_nvme_attach_controller" 00:23:38.290 } 00:23:38.290 EOF 00:23:38.290 )") 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.290 { 00:23:38.290 "params": { 00:23:38.290 "name": "Nvme$subsystem", 00:23:38.290 "trtype": "$TEST_TRANSPORT", 00:23:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.290 "adrfam": "ipv4", 00:23:38.290 "trsvcid": "$NVMF_PORT", 00:23:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.290 "hdgst": ${hdgst:-false}, 00:23:38.290 "ddgst": 
${ddgst:-false} 00:23:38.290 }, 00:23:38.290 "method": "bdev_nvme_attach_controller" 00:23:38.290 } 00:23:38.290 EOF 00:23:38.290 )") 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.290 { 00:23:38.290 "params": { 00:23:38.290 "name": "Nvme$subsystem", 00:23:38.290 "trtype": "$TEST_TRANSPORT", 00:23:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.290 "adrfam": "ipv4", 00:23:38.290 "trsvcid": "$NVMF_PORT", 00:23:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.290 "hdgst": ${hdgst:-false}, 00:23:38.290 "ddgst": ${ddgst:-false} 00:23:38.290 }, 00:23:38.290 "method": "bdev_nvme_attach_controller" 00:23:38.290 } 00:23:38.290 EOF 00:23:38.290 )") 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:38.290 [2024-12-05 20:43:31.695413] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:23:38.290 [2024-12-05 20:43:31.695463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427301 ] 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.290 { 00:23:38.290 "params": { 00:23:38.290 "name": "Nvme$subsystem", 00:23:38.290 "trtype": "$TEST_TRANSPORT", 00:23:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.290 "adrfam": "ipv4", 00:23:38.290 "trsvcid": "$NVMF_PORT", 00:23:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.290 "hdgst": ${hdgst:-false}, 00:23:38.290 "ddgst": ${ddgst:-false} 00:23:38.290 }, 00:23:38.290 "method": "bdev_nvme_attach_controller" 00:23:38.290 } 00:23:38.290 EOF 00:23:38.290 )") 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.290 { 00:23:38.290 "params": { 00:23:38.290 "name": "Nvme$subsystem", 00:23:38.290 "trtype": "$TEST_TRANSPORT", 00:23:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.290 "adrfam": "ipv4", 00:23:38.290 "trsvcid": "$NVMF_PORT", 00:23:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.290 "hdgst": ${hdgst:-false}, 00:23:38.290 "ddgst": ${ddgst:-false} 00:23:38.290 }, 00:23:38.290 "method": 
"bdev_nvme_attach_controller" 00:23:38.290 } 00:23:38.290 EOF 00:23:38.290 )") 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.290 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.290 { 00:23:38.290 "params": { 00:23:38.290 "name": "Nvme$subsystem", 00:23:38.290 "trtype": "$TEST_TRANSPORT", 00:23:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.290 "adrfam": "ipv4", 00:23:38.290 "trsvcid": "$NVMF_PORT", 00:23:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.290 "hdgst": ${hdgst:-false}, 00:23:38.290 "ddgst": ${ddgst:-false} 00:23:38.290 }, 00:23:38.290 "method": "bdev_nvme_attach_controller" 00:23:38.290 } 00:23:38.290 EOF 00:23:38.290 )") 00:23:38.291 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:38.291 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:23:38.291 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:38.291 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:38.291 "params": { 00:23:38.291 "name": "Nvme1", 00:23:38.291 "trtype": "tcp", 00:23:38.291 "traddr": "10.0.0.2", 00:23:38.291 "adrfam": "ipv4", 00:23:38.291 "trsvcid": "4420", 00:23:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.291 "hdgst": false, 00:23:38.291 "ddgst": false 00:23:38.291 }, 00:23:38.291 "method": "bdev_nvme_attach_controller" 00:23:38.291 },{ 00:23:38.291 "params": { 00:23:38.291 "name": "Nvme2", 00:23:38.291 "trtype": "tcp", 00:23:38.291 "traddr": "10.0.0.2", 00:23:38.291 "adrfam": "ipv4", 00:23:38.291 "trsvcid": "4420", 00:23:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:38.291 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:38.291 "hdgst": false, 00:23:38.291 "ddgst": false 00:23:38.291 }, 00:23:38.291 "method": "bdev_nvme_attach_controller" 00:23:38.291 },{ 00:23:38.291 "params": { 00:23:38.291 "name": "Nvme3", 00:23:38.291 "trtype": "tcp", 00:23:38.291 "traddr": "10.0.0.2", 00:23:38.291 "adrfam": "ipv4", 00:23:38.291 "trsvcid": "4420", 00:23:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:38.291 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:38.291 "hdgst": false, 00:23:38.291 "ddgst": false 00:23:38.291 }, 00:23:38.291 "method": "bdev_nvme_attach_controller" 00:23:38.291 },{ 00:23:38.291 "params": { 00:23:38.291 "name": "Nvme4", 00:23:38.291 "trtype": "tcp", 00:23:38.291 "traddr": "10.0.0.2", 00:23:38.291 "adrfam": "ipv4", 00:23:38.291 "trsvcid": "4420", 00:23:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:38.291 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:38.291 "hdgst": false, 00:23:38.291 "ddgst": false 00:23:38.291 }, 00:23:38.291 "method": "bdev_nvme_attach_controller" 00:23:38.291 },{ 00:23:38.291 "params": { 
00:23:38.291 "name": "Nvme5", 00:23:38.291 "trtype": "tcp", 00:23:38.291 "traddr": "10.0.0.2", 00:23:38.291 "adrfam": "ipv4", 00:23:38.291 "trsvcid": "4420", 00:23:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:38.291 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:38.291 "hdgst": false, 00:23:38.291 "ddgst": false 00:23:38.291 }, 00:23:38.291 "method": "bdev_nvme_attach_controller" 00:23:38.291 },{ 00:23:38.291 "params": { 00:23:38.291 "name": "Nvme6", 00:23:38.291 "trtype": "tcp", 00:23:38.291 "traddr": "10.0.0.2", 00:23:38.291 "adrfam": "ipv4", 00:23:38.291 "trsvcid": "4420", 00:23:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:38.291 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:38.291 "hdgst": false, 00:23:38.291 "ddgst": false 00:23:38.291 }, 00:23:38.291 "method": "bdev_nvme_attach_controller" 00:23:38.291 },{ 00:23:38.291 "params": { 00:23:38.291 "name": "Nvme7", 00:23:38.291 "trtype": "tcp", 00:23:38.291 "traddr": "10.0.0.2", 00:23:38.291 "adrfam": "ipv4", 00:23:38.291 "trsvcid": "4420", 00:23:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:38.291 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:38.291 "hdgst": false, 00:23:38.291 "ddgst": false 00:23:38.291 }, 00:23:38.291 "method": "bdev_nvme_attach_controller" 00:23:38.291 },{ 00:23:38.291 "params": { 00:23:38.291 "name": "Nvme8", 00:23:38.291 "trtype": "tcp", 00:23:38.291 "traddr": "10.0.0.2", 00:23:38.291 "adrfam": "ipv4", 00:23:38.291 "trsvcid": "4420", 00:23:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:38.291 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:38.291 "hdgst": false, 00:23:38.291 "ddgst": false 00:23:38.291 }, 00:23:38.291 "method": "bdev_nvme_attach_controller" 00:23:38.291 },{ 00:23:38.291 "params": { 00:23:38.291 "name": "Nvme9", 00:23:38.291 "trtype": "tcp", 00:23:38.291 "traddr": "10.0.0.2", 00:23:38.291 "adrfam": "ipv4", 00:23:38.291 "trsvcid": "4420", 00:23:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:38.291 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:38.291 "hdgst": false, 00:23:38.291 "ddgst": false 00:23:38.291 }, 00:23:38.291 "method": "bdev_nvme_attach_controller" 00:23:38.291 },{ 00:23:38.291 "params": { 00:23:38.291 "name": "Nvme10", 00:23:38.291 "trtype": "tcp", 00:23:38.291 "traddr": "10.0.0.2", 00:23:38.291 "adrfam": "ipv4", 00:23:38.291 "trsvcid": "4420", 00:23:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:38.291 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:38.291 "hdgst": false, 00:23:38.291 "ddgst": false 00:23:38.291 }, 00:23:38.291 "method": "bdev_nvme_attach_controller" 00:23:38.291 }' 00:23:38.571 [2024-12-05 20:43:31.772083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.572 [2024-12-05 20:43:31.810188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.952 Running I/O for 10 seconds... 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:39.952 20:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:39.952 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:40.213 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:40.213 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:40.213 20:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:40.213 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:40.213 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.213 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.213 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.213 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:40.213 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:40.213 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:40.472 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:40.472 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:40.472 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:40.472 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:40.472 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.472 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 427301 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 427301 ']' 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 427301 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427301 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427301' 00:23:40.733 killing process with pid 427301 00:23:40.733 20:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 427301 00:23:40.733 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 427301 00:23:40.733 Received shutdown signal, test time was about 0.887923 seconds 00:23:40.733 00:23:40.733 Latency(us) 00:23:40.733 [2024-12-05T19:43:34.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.733 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.733 Verification LBA range: start 0x0 length 0x400 00:23:40.733 Nvme1n1 : 0.87 297.18 18.57 0.00 0.00 211854.63 5898.24 174444.92 00:23:40.733 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.733 Verification LBA range: start 0x0 length 0x400 00:23:40.733 Nvme2n1 : 0.88 296.37 18.52 0.00 0.00 209449.05 4498.15 196369.69 00:23:40.733 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.733 Verification LBA range: start 0x0 length 0x400 00:23:40.733 Nvme3n1 : 0.88 374.36 23.40 0.00 0.00 163098.84 5659.93 193509.93 00:23:40.733 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.733 Verification LBA range: start 0x0 length 0x400 00:23:40.733 Nvme4n1 : 0.87 298.91 18.68 0.00 0.00 200135.94 5362.04 183024.17 00:23:40.733 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.733 Verification LBA range: start 0x0 length 0x400 00:23:40.733 Nvme5n1 : 0.87 295.50 18.47 0.00 0.00 200162.68 14417.92 202089.19 00:23:40.733 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.733 Verification LBA range: start 0x0 length 0x400 00:23:40.733 Nvme6n1 : 0.88 327.28 20.45 0.00 0.00 173511.63 8698.41 180164.42 00:23:40.733 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.733 Verification LBA range: start 0x0 length 0x400 00:23:40.733 Nvme7n1 : 0.87 
293.74 18.36 0.00 0.00 194093.38 14179.61 201135.94 00:23:40.733 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.733 Verification LBA range: start 0x0 length 0x400 00:23:40.733 Nvme8n1 : 0.86 304.90 19.06 0.00 0.00 182386.79 5332.25 200182.69 00:23:40.733 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.733 Verification LBA range: start 0x0 length 0x400 00:23:40.733 Nvme9n1 : 0.89 289.23 18.08 0.00 0.00 190516.83 22758.87 209715.20 00:23:40.733 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.733 Verification LBA range: start 0x0 length 0x400 00:23:40.733 Nvme10n1 : 0.89 288.51 18.03 0.00 0.00 187568.29 15132.86 224967.21 00:23:40.733 [2024-12-05T19:43:34.174Z] =================================================================================================================== 00:23:40.733 [2024-12-05T19:43:34.174Z] Total : 3065.97 191.62 0.00 0.00 190343.11 4498.15 224967.21 00:23:40.994 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 427018 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.932 rmmod nvme_tcp 00:23:41.932 rmmod nvme_fabrics 00:23:41.932 rmmod nvme_keyring 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 427018 ']' 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 427018 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 427018 ']' 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 427018 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # 
'[' Linux = Linux ']' 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427018 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427018' 00:23:41.932 killing process with pid 427018 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 427018 00:23:41.932 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 427018 00:23:42.501 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:42.501 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.501 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.501 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:42.501 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:42.501 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.501 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.501 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.501 20:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:42.501 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.501 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.501 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.410 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.410 00:23:44.410 real 0m7.861s 00:23:44.410 user 0m23.724s 00:23:44.410 sys 0m1.355s 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:44.411 ************************************ 00:23:44.411 END TEST nvmf_shutdown_tc2 00:23:44.411 ************************************ 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:44.411 ************************************ 00:23:44.411 START TEST nvmf_shutdown_tc3 00:23:44.411 ************************************ 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:44.411 20:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.411 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.673 20:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:44.673 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:44.673 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:44.673 Found net devices under 0000:af:00.0: cvl_0_0 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.673 20:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:44.673 Found net devices under 0000:af:00.1: cvl_0_1 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.673 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.674 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.674 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.674 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.674 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.674 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.674 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.674 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:23:44.674 00:23:44.674 --- 10.0.0.2 ping statistics --- 00:23:44.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.674 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:23:44.674 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:23:44.934 00:23:44.934 --- 10.0.0.1 ping statistics --- 00:23:44.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.934 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:23:44.934 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.934 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:44.934 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:44.934 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.934 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.934 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.934 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.934 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.935 
20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=428551 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 428551 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428551 ']' 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.935 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.935 [2024-12-05 20:43:38.219802] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:23:44.935 [2024-12-05 20:43:38.219849] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.935 [2024-12-05 20:43:38.295664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.935 [2024-12-05 20:43:38.335544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.935 [2024-12-05 20:43:38.335578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.935 [2024-12-05 20:43:38.335584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.935 [2024-12-05 20:43:38.335589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.935 [2024-12-05 20:43:38.335594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.935 [2024-12-05 20:43:38.337217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.935 [2024-12-05 20:43:38.337329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.935 [2024-12-05 20:43:38.337437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.935 [2024-12-05 20:43:38.337439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.874 [2024-12-05 20:43:39.083932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.874 20:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.874 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.874 Malloc1 00:23:45.874 [2024-12-05 20:43:39.187341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.874 Malloc2 00:23:45.874 Malloc3 00:23:45.874 Malloc4 00:23:46.148 Malloc5 00:23:46.148 Malloc6 00:23:46.148 Malloc7 00:23:46.148 Malloc8 00:23:46.148 Malloc9 
00:23:46.148 Malloc10 00:23:46.148 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.148 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:46.148 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.148 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=428862 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 428862 /var/tmp/bdevperf.sock 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428862 ']' 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:46.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.409 { 00:23:46.409 "params": { 00:23:46.409 "name": "Nvme$subsystem", 00:23:46.409 "trtype": "$TEST_TRANSPORT", 00:23:46.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.409 "adrfam": "ipv4", 00:23:46.409 "trsvcid": "$NVMF_PORT", 00:23:46.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.409 "hdgst": ${hdgst:-false}, 00:23:46.409 "ddgst": ${ddgst:-false} 00:23:46.409 }, 00:23:46.409 "method": "bdev_nvme_attach_controller" 00:23:46.409 } 00:23:46.409 EOF 00:23:46.409 )") 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:46.409 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.410 { 00:23:46.410 "params": { 00:23:46.410 "name": "Nvme$subsystem", 00:23:46.410 "trtype": "$TEST_TRANSPORT", 00:23:46.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.410 
"adrfam": "ipv4", 00:23:46.410 "trsvcid": "$NVMF_PORT", 00:23:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.410 "hdgst": ${hdgst:-false}, 00:23:46.410 "ddgst": ${ddgst:-false} 00:23:46.410 }, 00:23:46.410 "method": "bdev_nvme_attach_controller" 00:23:46.410 } 00:23:46.410 EOF 00:23:46.410 )") 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.410 { 00:23:46.410 "params": { 00:23:46.410 "name": "Nvme$subsystem", 00:23:46.410 "trtype": "$TEST_TRANSPORT", 00:23:46.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.410 "adrfam": "ipv4", 00:23:46.410 "trsvcid": "$NVMF_PORT", 00:23:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.410 "hdgst": ${hdgst:-false}, 00:23:46.410 "ddgst": ${ddgst:-false} 00:23:46.410 }, 00:23:46.410 "method": "bdev_nvme_attach_controller" 00:23:46.410 } 00:23:46.410 EOF 00:23:46.410 )") 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.410 { 00:23:46.410 "params": { 00:23:46.410 "name": "Nvme$subsystem", 00:23:46.410 "trtype": "$TEST_TRANSPORT", 00:23:46.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.410 "adrfam": "ipv4", 00:23:46.410 "trsvcid": "$NVMF_PORT", 00:23:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:46.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.410 "hdgst": ${hdgst:-false}, 00:23:46.410 "ddgst": ${ddgst:-false} 00:23:46.410 }, 00:23:46.410 "method": "bdev_nvme_attach_controller" 00:23:46.410 } 00:23:46.410 EOF 00:23:46.410 )") 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.410 { 00:23:46.410 "params": { 00:23:46.410 "name": "Nvme$subsystem", 00:23:46.410 "trtype": "$TEST_TRANSPORT", 00:23:46.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.410 "adrfam": "ipv4", 00:23:46.410 "trsvcid": "$NVMF_PORT", 00:23:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.410 "hdgst": ${hdgst:-false}, 00:23:46.410 "ddgst": ${ddgst:-false} 00:23:46.410 }, 00:23:46.410 "method": "bdev_nvme_attach_controller" 00:23:46.410 } 00:23:46.410 EOF 00:23:46.410 )") 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.410 { 00:23:46.410 "params": { 00:23:46.410 "name": "Nvme$subsystem", 00:23:46.410 "trtype": "$TEST_TRANSPORT", 00:23:46.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.410 "adrfam": "ipv4", 00:23:46.410 "trsvcid": "$NVMF_PORT", 00:23:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.410 "hdgst": ${hdgst:-false}, 00:23:46.410 "ddgst": 
${ddgst:-false} 00:23:46.410 }, 00:23:46.410 "method": "bdev_nvme_attach_controller" 00:23:46.410 } 00:23:46.410 EOF 00:23:46.410 )") 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.410 { 00:23:46.410 "params": { 00:23:46.410 "name": "Nvme$subsystem", 00:23:46.410 "trtype": "$TEST_TRANSPORT", 00:23:46.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.410 "adrfam": "ipv4", 00:23:46.410 "trsvcid": "$NVMF_PORT", 00:23:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.410 "hdgst": ${hdgst:-false}, 00:23:46.410 "ddgst": ${ddgst:-false} 00:23:46.410 }, 00:23:46.410 "method": "bdev_nvme_attach_controller" 00:23:46.410 } 00:23:46.410 EOF 00:23:46.410 )") 00:23:46.410 [2024-12-05 20:43:39.656216] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:23:46.410 [2024-12-05 20:43:39.656263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428862 ] 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.410 { 00:23:46.410 "params": { 00:23:46.410 "name": "Nvme$subsystem", 00:23:46.410 "trtype": "$TEST_TRANSPORT", 00:23:46.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.410 "adrfam": "ipv4", 00:23:46.410 "trsvcid": "$NVMF_PORT", 00:23:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.410 "hdgst": ${hdgst:-false}, 00:23:46.410 "ddgst": ${ddgst:-false} 00:23:46.410 }, 00:23:46.410 "method": "bdev_nvme_attach_controller" 00:23:46.410 } 00:23:46.410 EOF 00:23:46.410 )") 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.410 { 00:23:46.410 "params": { 00:23:46.410 "name": "Nvme$subsystem", 00:23:46.410 "trtype": "$TEST_TRANSPORT", 00:23:46.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.410 "adrfam": "ipv4", 00:23:46.410 "trsvcid": "$NVMF_PORT", 00:23:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.410 "hdgst": 
${hdgst:-false}, 00:23:46.410 "ddgst": ${ddgst:-false} 00:23:46.410 }, 00:23:46.410 "method": "bdev_nvme_attach_controller" 00:23:46.410 } 00:23:46.410 EOF 00:23:46.410 )") 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.410 { 00:23:46.410 "params": { 00:23:46.410 "name": "Nvme$subsystem", 00:23:46.410 "trtype": "$TEST_TRANSPORT", 00:23:46.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.410 "adrfam": "ipv4", 00:23:46.410 "trsvcid": "$NVMF_PORT", 00:23:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.410 "hdgst": ${hdgst:-false}, 00:23:46.410 "ddgst": ${ddgst:-false} 00:23:46.410 }, 00:23:46.410 "method": "bdev_nvme_attach_controller" 00:23:46.410 } 00:23:46.410 EOF 00:23:46.410 )") 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
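The xtrace above captures how the harness assembles the bdevperf JSON config: each loop iteration expands a heredoc into one `bdev_nvme_attach_controller` fragment, the fragments are collected into a bash array, and `IFS=,` plus `printf '%s\n' "${config[*]}"` splices them, comma-separated, into a wrapper document that `jq .` validates and pretty-prints. A condensed standalone sketch of that pattern (the function name `gen_target_json` and the hard-coded transport values are stand-ins; the real helper reads `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, and `NVMF_PORT` from the test environment):

```shell
# Condensed sketch of the config-assembly pattern traced above.
# Transport values are hard-coded stand-ins for this environment's variables.
gen_target_json() {
  local subsystem config=()
  # One heredoc-expanded JSON fragment per requested subsystem.
  for subsystem in "${@:-1}"; do
    config+=("$(cat << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # Join the fragments with commas inside a wrapper document; jq validates
  # and pretty-prints the result, exactly as the IFS=, / printf step above.
  jq . << JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=','; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
}

gen_target_json 1 2
```

Setting `IFS=,` in the command-substitution subshell makes `"${config[*]}"` join the fragments with commas without leaking the changed `IFS` into the caller, which is why the joined `{…},{…}` list is valid JSON once embedded in the `"config"` array.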
00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:46.410 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:46.410 "params": { 00:23:46.410 "name": "Nvme1", 00:23:46.410 "trtype": "tcp", 00:23:46.410 "traddr": "10.0.0.2", 00:23:46.410 "adrfam": "ipv4", 00:23:46.410 "trsvcid": "4420", 00:23:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.410 "hdgst": false, 00:23:46.410 "ddgst": false 00:23:46.410 }, 00:23:46.410 "method": "bdev_nvme_attach_controller" 00:23:46.410 },{ 00:23:46.410 "params": { 00:23:46.410 "name": "Nvme2", 00:23:46.410 "trtype": "tcp", 00:23:46.410 "traddr": "10.0.0.2", 00:23:46.410 "adrfam": "ipv4", 00:23:46.410 "trsvcid": "4420", 00:23:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:46.410 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:46.410 "hdgst": false, 00:23:46.410 "ddgst": false 00:23:46.411 }, 00:23:46.411 "method": "bdev_nvme_attach_controller" 00:23:46.411 },{ 00:23:46.411 "params": { 00:23:46.411 "name": "Nvme3", 00:23:46.411 "trtype": "tcp", 00:23:46.411 "traddr": "10.0.0.2", 00:23:46.411 "adrfam": "ipv4", 00:23:46.411 "trsvcid": "4420", 00:23:46.411 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:46.411 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:46.411 "hdgst": false, 00:23:46.411 "ddgst": false 00:23:46.411 }, 00:23:46.411 "method": "bdev_nvme_attach_controller" 00:23:46.411 },{ 00:23:46.411 "params": { 00:23:46.411 "name": "Nvme4", 00:23:46.411 "trtype": "tcp", 00:23:46.411 "traddr": "10.0.0.2", 00:23:46.411 "adrfam": "ipv4", 00:23:46.411 "trsvcid": "4420", 00:23:46.411 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:46.411 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:46.411 "hdgst": false, 00:23:46.411 "ddgst": false 00:23:46.411 }, 00:23:46.411 "method": "bdev_nvme_attach_controller" 00:23:46.411 },{ 00:23:46.411 "params": { 
00:23:46.411 "name": "Nvme5", 00:23:46.411 "trtype": "tcp", 00:23:46.411 "traddr": "10.0.0.2", 00:23:46.411 "adrfam": "ipv4", 00:23:46.411 "trsvcid": "4420", 00:23:46.411 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:46.411 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:46.411 "hdgst": false, 00:23:46.411 "ddgst": false 00:23:46.411 }, 00:23:46.411 "method": "bdev_nvme_attach_controller" 00:23:46.411 },{ 00:23:46.411 "params": { 00:23:46.411 "name": "Nvme6", 00:23:46.411 "trtype": "tcp", 00:23:46.411 "traddr": "10.0.0.2", 00:23:46.411 "adrfam": "ipv4", 00:23:46.411 "trsvcid": "4420", 00:23:46.411 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:46.411 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:46.411 "hdgst": false, 00:23:46.411 "ddgst": false 00:23:46.411 }, 00:23:46.411 "method": "bdev_nvme_attach_controller" 00:23:46.411 },{ 00:23:46.411 "params": { 00:23:46.411 "name": "Nvme7", 00:23:46.411 "trtype": "tcp", 00:23:46.411 "traddr": "10.0.0.2", 00:23:46.411 "adrfam": "ipv4", 00:23:46.411 "trsvcid": "4420", 00:23:46.411 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:46.411 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:46.411 "hdgst": false, 00:23:46.411 "ddgst": false 00:23:46.411 }, 00:23:46.411 "method": "bdev_nvme_attach_controller" 00:23:46.411 },{ 00:23:46.411 "params": { 00:23:46.411 "name": "Nvme8", 00:23:46.411 "trtype": "tcp", 00:23:46.411 "traddr": "10.0.0.2", 00:23:46.411 "adrfam": "ipv4", 00:23:46.411 "trsvcid": "4420", 00:23:46.411 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:46.411 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:46.411 "hdgst": false, 00:23:46.411 "ddgst": false 00:23:46.411 }, 00:23:46.411 "method": "bdev_nvme_attach_controller" 00:23:46.411 },{ 00:23:46.411 "params": { 00:23:46.411 "name": "Nvme9", 00:23:46.411 "trtype": "tcp", 00:23:46.411 "traddr": "10.0.0.2", 00:23:46.411 "adrfam": "ipv4", 00:23:46.411 "trsvcid": "4420", 00:23:46.411 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:46.411 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:46.411 "hdgst": false, 00:23:46.411 "ddgst": false 00:23:46.411 }, 00:23:46.411 "method": "bdev_nvme_attach_controller" 00:23:46.411 },{ 00:23:46.411 "params": { 00:23:46.411 "name": "Nvme10", 00:23:46.411 "trtype": "tcp", 00:23:46.411 "traddr": "10.0.0.2", 00:23:46.411 "adrfam": "ipv4", 00:23:46.411 "trsvcid": "4420", 00:23:46.411 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:46.411 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:46.411 "hdgst": false, 00:23:46.411 "ddgst": false 00:23:46.411 }, 00:23:46.411 "method": "bdev_nvme_attach_controller" 00:23:46.411 }' 00:23:46.411 [2024-12-05 20:43:39.729543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.411 [2024-12-05 20:43:39.767288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.313 Running I/O for 10 seconds... 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:48.880 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 428551 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428551 ']' 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428551 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:49.140 20:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.140 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428551 00:23:49.415 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:49.415 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:49.415 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 428551' 00:23:49.415 killing process with pid 428551 00:23:49.415 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 428551 00:23:49.415 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 428551 00:23:49.415 [2024-12-05 20:43:42.609970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbde90 is same with the state(6) to be set 00:23:49.415 [2024-12-05 20:43:42.610022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbde90 is same with the state(6) to be set 00:23:49.415 [2024-12-05 20:43:42.610030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbde90 is same with the state(6) to be set 00:23:49.415 [2024-12-05 20:43:42.610036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbde90 is same with the state(6) to be set 00:23:49.415 [2024-12-05 20:43:42.610042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbde90 is same with the state(6) to be set 00:23:49.415 [2024-12-05 20:43:42.610048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbde90 is same with the state(6) to be set 
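The `waitforio` trace above (target/shutdown.sh) is a bounded polling loop: up to 10 attempts at `bdev_get_iostat` over the bdevperf RPC socket, extracting `num_read_ops` with jq and sleeping 0.25 s between samples, succeeding once the bdev has served at least 100 reads (67 on the first sample here, 195 on the second). A minimal sketch, assuming an `rpc_cmd` wrapper that forwards its arguments to rpc.py against the given `-s` socket and prints JSON:

```shell
# Minimal sketch of the waitforio polling loop traced above.
# rpc_cmd is assumed to behave like the harness wrapper: it runs the named
# RPC against the -s socket and prints the JSON response on stdout.
waitforio() {
  local rpc_sock=$1 bdev=$2
  local ret=1 i read_io_count
  for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
      jq -r '.bdevs[0].num_read_ops')
    # bdevperf is considered up once the bdev has completed >= 100 reads.
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}
```

The loop returns nonzero after 10 failed samples (about 2.5 s), which lets the caller trap and tear down the test instead of hanging on a bdevperf instance that never starts serving I/O.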
00:23:49.416 [2024-12-05 20:43:42.611037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.416 [2024-12-05 20:43:42.611087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.416 [2024-12-05 20:43:42.611096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.416 [2024-12-05 20:43:42.611103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.416 [2024-12-05 20:43:42.611110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.416 [2024-12-05 20:43:42.611116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.416 [2024-12-05 20:43:42.611123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.416 [2024-12-05 20:43:42.611133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.416 [2024-12-05 20:43:42.611140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19212e0 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the 
state(6) to be set
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611758] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.611796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0760 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.613577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.613596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.613606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.613612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.613618] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.416 [2024-12-05 20:43:42.613623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 
is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 
00:23:49.417 [2024-12-05 20:43:42.613836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613907] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.613959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe360 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 
is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.417 [2024-12-05 20:43:42.615369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 
00:23:49.418 [2024-12-05 20:43:42.615404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615479] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.615613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe830 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 
is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 00:23:49.418 [2024-12-05 20:43:42.616503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set 
00:23:49.418 [2024-12-05 20:43:42.616509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbed20 is same with the state(6) to be set
00:23:49.419 [last message repeated for tqpair=0xfbed20 through 2024-12-05 20:43:42.616795]
00:23:49.419 [2024-12-05 20:43:42.617382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf0a0 is same with the state(6) to be set
00:23:49.420 [last message repeated for tqpair=0xfbf0a0 through 2024-12-05 20:43:42.617739]
00:23:49.420 [2024-12-05 20:43:42.618441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf570 is same with the state(6) to be set
00:23:49.420 [last message repeated for tqpair=0xfbf570 through 2024-12-05 20:43:42.618474]
00:23:49.420 [2024-12-05 20:43:42.619812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbff30 is same with the state(6) to be set
00:23:49.421 [last message repeated for tqpair=0xfbff30 through 2024-12-05 20:43:42.620193]
00:23:49.421 [2024-12-05 20:43:42.620726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set
00:23:49.421 [last message repeated for tqpair=0x11b0290 through 2024-12-05 20:43:42.620821]
00:23:49.421 [2024-12-05 20:43:42.627865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.421 [2024-12-05 20:43:42.627887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.421 [the two messages above repeated for cid:1, cid:2 and cid:3]
00:23:49.421 [2024-12-05 20:43:42.627935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835610 is same with the state(6) to be set
00:23:49.421 [the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequence (cid:0 through cid:3), followed by the recv state error, repeated for tqpair=0x1d415d0, 0x1920440, 0x19167a0, 0x1d4c410 and 0x1d7b3b0]
00:23:49.421 [2024-12-05 20:43:42.628357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.421 [2024-12-05 20:43:42.628364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.421 [2024-12-05 20:43:42.628371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.421 [2024-12-05 20:43:42.628378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.421 [2024-12-05 20:43:42.628385] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.421 [2024-12-05 20:43:42.628391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.628400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.422 [2024-12-05 20:43:42.628406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.628412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8ec40 is same with the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.628434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.422 [2024-12-05 20:43:42.628444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.628451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.422 [2024-12-05 20:43:42.628457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.628464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.422 [2024-12-05 20:43:42.628470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.628477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:49.422 [2024-12-05 20:43:42.628483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.628489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1915320 is same with the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.628503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19212e0 (9): Bad file descriptor 00:23:49.422 [2024-12-05 20:43:42.629141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 
20:43:42.629464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.629471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.629480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.629488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.629497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.629505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with 
the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.629514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.629524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.629524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.422 [2024-12-05 20:43:42.629534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.629537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.422 [2024-12-05 20:43:42.629541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.422 [2024-12-05 20:43:42.629544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629608] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be 
set 00:23:49.423 [2024-12-05 20:43:42.629654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38
nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05
20:43:42.629794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0290 is same with the state(6) to be set 00:23:49.423 [2024-12-05 20:43:42.629825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.423 [2024-12-05 20:43:42.629841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.423 [2024-12-05 20:43:42.629848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.423 [2024-12-05 20:43:42.629855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.423 [2024-12-05 20:43:42.629861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.423 [2024-12-05 20:43:42.629870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.629876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.629884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.629891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.629899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.629906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.629913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.629919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.629926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.629933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.629941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.629946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.629954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.629960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.629968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.629974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.629981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.629987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.629995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:49.424 [2024-12-05 20:43:42.630215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.424 [2024-12-05 20:43:42.630428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.424 [2024-12-05 20:43:42.630433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.630727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.630733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.425 [2024-12-05 20:43:42.639417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.425 [2024-12-05 20:43:42.639426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.639433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.639443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.639450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.639459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.639467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.639476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.639483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.639492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.639500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.639511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.639518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.639527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.639535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.639544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.639551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.639560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.639567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.639577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.639584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.639593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.639601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.639609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a22190 is same with the state(6) to be set
00:23:49.426 [2024-12-05 20:43:42.639854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1835610 (9): Bad file descriptor
00:23:49.426 [2024-12-05 20:43:42.639878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d415d0 (9): Bad file descriptor
00:23:49.426 [2024-12-05 20:43:42.639893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1920440 (9): Bad file descriptor
00:23:49.426 [2024-12-05 20:43:42.639908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19167a0 (9): Bad file descriptor
00:23:49.426 [2024-12-05 20:43:42.639924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4c410 (9): Bad file descriptor
00:23:49.426 [2024-12-05 20:43:42.639941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7b3b0 (9): Bad file descriptor
00:23:49.426 [2024-12-05 20:43:42.639954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8ec40 (9): Bad file descriptor
00:23:49.426 [2024-12-05 20:43:42.639967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1915320 (9): Bad file descriptor
00:23:49.426 [2024-12-05 20:43:42.640003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.426 [2024-12-05 20:43:42.640014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.640023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.426 [2024-12-05 20:43:42.640030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.640038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.426 [2024-12-05 20:43:42.640051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.640067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.426 [2024-12-05 20:43:42.640075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.640083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d75290 is same with the state(6) to be set
00:23:49.426 [2024-12-05 20:43:42.640134] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:49.426 [2024-12-05 20:43:42.642422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.426 [2024-12-05 20:43:42.642727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.426 [2024-12-05 20:43:42.642736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.427 [2024-12-05 20:43:42.642744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.427 [2024-12-05 20:43:42.642753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.427 [2024-12-05 20:43:42.642762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.427 [2024-12-05 20:43:42.642770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.427 [2024-12-05 20:43:42.642779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.427 [2024-12-05 20:43:42.642787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.427 [2024-12-05 20:43:42.642795] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.642803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.642812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.642819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.642828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.642838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.642847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.642855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.642864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.642871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.642881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.642889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.642898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.642905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.642915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.642923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.642933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.642940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.642950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.642957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.642966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.642974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.642983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 
[2024-12-05 20:43:42.642990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643378] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.427 [2024-12-05 20:43:42.643386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.427 [2024-12-05 20:43:42.643394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.643402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.643411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.643418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.643427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.643437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.643446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.643454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.643463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.643470] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.643481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.643488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.643497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.643504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.643513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.643520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.643529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.643536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.643544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25270 is same with the state(6) to be set 00:23:49.428 [2024-12-05 20:43:42.644929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:49.428 [2024-12-05 20:43:42.644964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:49.428 [2024-12-05 20:43:42.644975] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:49.428 [2024-12-05 20:43:42.645716] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:49.428 [2024-12-05 20:43:42.645808] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:49.428 [2024-12-05 20:43:42.645996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.428 [2024-12-05 20:43:42.646014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d415d0 with addr=10.0.0.2, port=4420 00:23:49.428 [2024-12-05 20:43:42.646024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d415d0 is same with the state(6) to be set 00:23:49.428 [2024-12-05 20:43:42.646176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.428 [2024-12-05 20:43:42.646189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19212e0 with addr=10.0.0.2, port=4420 00:23:49.428 [2024-12-05 20:43:42.646197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19212e0 is same with the state(6) to be set 00:23:49.428 [2024-12-05 20:43:42.646358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.428 [2024-12-05 20:43:42.646371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1835610 with addr=10.0.0.2, port=4420 00:23:49.428 [2024-12-05 20:43:42.646380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835610 is same with the state(6) to be set 00:23:49.428 [2024-12-05 20:43:42.646663] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:49.428 [2024-12-05 20:43:42.646711] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:49.428 [2024-12-05 20:43:42.646753] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: 
*ERROR*: Unexpected PDU type 0x00 00:23:49.428 [2024-12-05 20:43:42.646840] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:49.428 [2024-12-05 20:43:42.646876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d415d0 (9): Bad file descriptor 00:23:49.428 [2024-12-05 20:43:42.646890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19212e0 (9): Bad file descriptor 00:23:49.428 [2024-12-05 20:43:42.646900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1835610 (9): Bad file descriptor 00:23:49.428 [2024-12-05 20:43:42.646968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.646982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.646997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:49.428 [2024-12-05 20:43:42.647152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.428 [2024-12-05 20:43:42.647318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.428 [2024-12-05 20:43:42.647326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 [2024-12-05 20:43:42.647528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.429 [2024-12-05 20:43:42.647536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.429 
[2024-12-05 20:43:42.647545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.429 [2024-12-05 20:43:42.647907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.429 [2024-12-05 20:43:42.647916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.647924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.647933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.647940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.647949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.647957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.647966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.647974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.647983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.647990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.647999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.648007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.648016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.648023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.648033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.648041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.648049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.648062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.648072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.648080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.648089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2587090 is same with the state(6) to be set
00:23:49.430 [2024-12-05 20:43:42.648195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:49.430 [2024-12-05 20:43:42.648205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:49.430 [2024-12-05 20:43:42.648214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:49.430 [2024-12-05 20:43:42.648222] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:49.430 [2024-12-05 20:43:42.648232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:49.430 [2024-12-05 20:43:42.648239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:49.430 [2024-12-05 20:43:42.648246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:49.430 [2024-12-05 20:43:42.648254] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:49.430 [2024-12-05 20:43:42.648262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:49.430 [2024-12-05 20:43:42.648269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:49.430 [2024-12-05 20:43:42.648276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:49.430 [2024-12-05 20:43:42.648283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:49.430 [2024-12-05 20:43:42.649267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:49.430 [2024-12-05 20:43:42.649597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.430 [2024-12-05 20:43:42.649615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4c410 with addr=10.0.0.2, port=4420
00:23:49.430 [2024-12-05 20:43:42.649625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4c410 is same with the state(6) to be set
00:23:49.430 [2024-12-05 20:43:42.649893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4c410 (9): Bad file descriptor
00:23:49.430 [2024-12-05 20:43:42.649937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d75290 (9): Bad file descriptor
00:23:49.430 [2024-12-05 20:43:42.650006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:49.430 [2024-12-05 20:43:42.650016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:49.430 [2024-12-05 20:43:42.650025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:49.430 [2024-12-05 20:43:42.650032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:23:49.430 [2024-12-05 20:43:42.650077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.430 [2024-12-05 20:43:42.650378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.430 [2024-12-05 20:43:42.650388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.650986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.650994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.651003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.651010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.651021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.651029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.651039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.431 [2024-12-05 20:43:42.651046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.431 [2024-12-05 20:43:42.651056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.432 [2024-12-05 20:43:42.651067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.432 [2024-12-05 20:43:42.651077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432
[2024-12-05 20:43:42.651084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.651093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.651101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.651110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.651117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.651127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.651134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.651143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.651151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.651160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.651167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.651176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.651184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.651193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.651202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.651210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b26220 is same with the state(6) to be set 00:23:49.432 [2024-12-05 20:43:42.652274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:49.432 [2024-12-05 20:43:42.652441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652536] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.432 [2024-12-05 20:43:42.652762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.432 [2024-12-05 20:43:42.652772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 
20:43:42.652833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652932] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.652983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.652991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 
[2024-12-05 20:43:42.653132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.433 [2024-12-05 20:43:42.653343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.433 [2024-12-05 20:43:42.653351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.653361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.653369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.653378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.653386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.653393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ef90 is same with the state(6) to be set 00:23:49.434 [2024-12-05 20:43:42.654607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:49.434 [2024-12-05 20:43:42.654637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.654656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.654675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.654693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.654711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.654728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.654745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.654762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.654779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.654799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.654816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.434 [2024-12-05 20:43:42.654824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.434 [2024-12-05 20:43:42.654834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.434 [2024-12-05 20:43:42.654842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.434 [2024-12-05 20:43:42.654851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.434 [2024-12-05 20:43:42.654859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion NOTICE pairs repeated for READ cid:18-63 (lba:26880-32640) and WRITE cid:0-3 (lba:32768-33152), every command completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:23:49.435 [2024-12-05 20:43:42.655732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1be90 is same with the state(6) to be set
00:23:49.435 [2024-12-05 20:43:42.656798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.435 [2024-12-05 20:43:42.656813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion NOTICE pairs repeated for READ cid:1-60 (lba:24704-32256), every command completed ABORTED - SQ DELETION (00/08) qid:1; log continues mid-record ...]
... nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.657775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.657782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.657790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.657796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.657804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.657811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.657818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2c6fba0 is same with the state(6) to be set 00:23:49.437 [2024-12-05 20:43:42.658719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:49.437 [2024-12-05 20:43:42.658759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.658985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.658992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.437 [2024-12-05 20:43:42.659000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.437 [2024-12-05 20:43:42.659006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:49.438 [2024-12-05 20:43:42.659017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659103] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 
20:43:42.659352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.438 [2024-12-05 20:43:42.659512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.438 [2024-12-05 20:43:42.659519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.659533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.659547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.659561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.659578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.659592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 
[2024-12-05 20:43:42.659606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.659620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.659634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.659648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.659663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.659677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.659684] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba4ae0 is same with the state(6) to be set 00:23:49.439 [2024-12-05 20:43:42.660578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:49.439 [2024-12-05 20:43:42.660596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:49.439 [2024-12-05 20:43:42.660607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:49.439 [2024-12-05 20:43:42.660618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:49.439 [2024-12-05 20:43:42.660680] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:49.439 [2024-12-05 20:43:42.660742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:49.439 [2024-12-05 20:43:42.660944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.439 [2024-12-05 20:43:42.660958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1915320 with addr=10.0.0.2, port=4420 00:23:49.439 [2024-12-05 20:43:42.660967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1915320 is same with the state(6) to be set 00:23:49.439 [2024-12-05 20:43:42.661198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.439 [2024-12-05 20:43:42.661210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1920440 with addr=10.0.0.2, port=4420 00:23:49.439 [2024-12-05 20:43:42.661221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1920440 is same with the state(6) to be set 00:23:49.439 [2024-12-05 20:43:42.661346] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.439 [2024-12-05 20:43:42.661356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19167a0 with addr=10.0.0.2, port=4420 00:23:49.439 [2024-12-05 20:43:42.661363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19167a0 is same with the state(6) to be set 00:23:49.439 [2024-12-05 20:43:42.661485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.439 [2024-12-05 20:43:42.661496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8ec40 with addr=10.0.0.2, port=4420 00:23:49.439 [2024-12-05 20:43:42.661503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8ec40 is same with the state(6) to be set 00:23:49.439 [2024-12-05 20:43:42.662355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.439 [2024-12-05 20:43:42.662635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.439 [2024-12-05 20:43:42.662642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662756] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 
20:43:42.662929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.662988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.662994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:49.440 [2024-12-05 20:43:42.663181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.440 [2024-12-05 20:43:42.663196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.440 [2024-12-05 20:43:42.663202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.441 [2024-12-05 20:43:42.663210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.441 [2024-12-05 20:43:42.663217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.441 [2024-12-05 20:43:42.663225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.441 [2024-12-05 20:43:42.663232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.441 [2024-12-05 20:43:42.663239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.441 [2024-12-05 20:43:42.663246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.441 [2024-12-05 20:43:42.663253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.441 [2024-12-05 20:43:42.663261] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.441 [2024-12-05 20:43:42.663269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.441 [2024-12-05 20:43:42.663275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.441 [2024-12-05 20:43:42.663283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.441 [2024-12-05 20:43:42.663290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.441 [2024-12-05 20:43:42.663298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.441 [2024-12-05 20:43:42.663305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.441 [2024-12-05 20:43:42.663313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.441 [2024-12-05 20:43:42.663319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.441 [2024-12-05 20:43:42.663327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba37f0 is same with the state(6) to be set 00:23:49.441 [2024-12-05 20:43:42.664419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:49.441 [2024-12-05 20:43:42.664435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting 
controller
00:23:49.441 [2024-12-05 20:43:42.664444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:49.441 [2024-12-05 20:43:42.664452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:49.441 task offset: 32768 on job bdev=Nvme6n1 fails
00:23:49.441
00:23:49.441 Latency(us)
00:23:49.441 [2024-12-05T19:43:42.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:49.441 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.441 Job: Nvme1n1 ended in about 0.91 seconds with error
00:23:49.441 Verification LBA range: start 0x0 length 0x400
00:23:49.441 Nvme1n1 : 0.91 210.65 13.17 70.22 0.00 225652.13 15609.48 203042.44
00:23:49.441 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.441 Job: Nvme2n1 ended in about 0.92 seconds with error
00:23:49.441 Verification LBA range: start 0x0 length 0x400
00:23:49.441 Nvme2n1 : 0.92 208.90 13.06 69.63 0.00 223894.57 16681.89 198276.19
00:23:49.441 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.441 Job: Nvme3n1 ended in about 0.92 seconds with error
00:23:49.441 Verification LBA range: start 0x0 length 0x400
00:23:49.441 Nvme3n1 : 0.92 282.22 17.64 69.47 0.00 174506.70 12749.73 196369.69
00:23:49.441 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.441 Job: Nvme4n1 ended in about 0.92 seconds with error
00:23:49.441 Verification LBA range: start 0x0 length 0x400
00:23:49.441 Nvme4n1 : 0.92 212.21 13.26 69.29 0.00 214588.58 5332.25 205902.20
00:23:49.441 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.441 Job: Nvme5n1 ended in about 0.92 seconds with error
00:23:49.441 Verification LBA range: start 0x0 length 0x400
00:23:49.441 Nvme5n1 : 0.92 279.43 17.46 69.86 0.00 169988.75 13405.09 188743.68
00:23:49.441 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.441 Job: Nvme6n1 ended in about 0.91 seconds with error
00:23:49.441 Verification LBA range: start 0x0 length 0x400
00:23:49.441 Nvme6n1 : 0.91 281.87 17.62 70.47 0.00 165547.66 14834.97 195416.44
00:23:49.441 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.441 Job: Nvme7n1 ended in about 0.91 seconds with error
00:23:49.441 Verification LBA range: start 0x0 length 0x400
00:23:49.441 Nvme7n1 : 0.91 281.58 17.60 70.39 0.00 162895.31 13881.72 205902.20
00:23:49.441 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.441 Job: Nvme8n1 ended in about 0.93 seconds with error
00:23:49.441 Verification LBA range: start 0x0 length 0x400
00:23:49.441 Nvme8n1 : 0.93 207.42 12.96 69.14 0.00 204259.37 15371.17 195416.44
00:23:49.441 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.441 Job: Nvme9n1 ended in about 0.93 seconds with error
00:23:49.441 Verification LBA range: start 0x0 length 0x400
00:23:49.441 Nvme9n1 : 0.93 206.20 12.89 68.73 0.00 202046.84 16801.05 210668.45
00:23:49.441 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.441 Job: Nvme10n1 ended in about 0.93 seconds with error
00:23:49.441 Verification LBA range: start 0x0 length 0x400
00:23:49.441 Nvme10n1 : 0.93 207.01 12.94 69.00 0.00 197641.31 16920.20 215434.71
00:23:49.441 [2024-12-05T19:43:42.882Z] ===================================================================================================================
00:23:49.441 [2024-12-05T19:43:42.882Z] Total : 2377.49 148.59 696.21 0.00 191758.45 5332.25 215434.71
00:23:49.441 [2024-12-05 20:43:42.692685] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:49.441 [2024-12-05 20:43:42.692729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:49.441 [2024-12-05 20:43:42.693090] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.441 [2024-12-05 20:43:42.693109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7b3b0 with addr=10.0.0.2, port=4420 00:23:49.441 [2024-12-05 20:43:42.693121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7b3b0 is same with the state(6) to be set 00:23:49.441 [2024-12-05 20:43:42.693136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1915320 (9): Bad file descriptor 00:23:49.441 [2024-12-05 20:43:42.693149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1920440 (9): Bad file descriptor 00:23:49.441 [2024-12-05 20:43:42.693159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19167a0 (9): Bad file descriptor 00:23:49.441 [2024-12-05 20:43:42.693167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8ec40 (9): Bad file descriptor 00:23:49.441 [2024-12-05 20:43:42.693482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.441 [2024-12-05 20:43:42.693498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1835610 with addr=10.0.0.2, port=4420 00:23:49.441 [2024-12-05 20:43:42.693506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835610 is same with the state(6) to be set 00:23:49.441 [2024-12-05 20:43:42.693733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.441 [2024-12-05 20:43:42.693745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19212e0 with addr=10.0.0.2, port=4420 00:23:49.441 [2024-12-05 20:43:42.693752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19212e0 is same with the state(6) to be set 00:23:49.441 [2024-12-05 20:43:42.693957] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.441 [2024-12-05 20:43:42.693967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d415d0 with addr=10.0.0.2, port=4420 00:23:49.441 [2024-12-05 20:43:42.693975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d415d0 is same with the state(6) to be set 00:23:49.441 [2024-12-05 20:43:42.694121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.441 [2024-12-05 20:43:42.694132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4c410 with addr=10.0.0.2, port=4420 00:23:49.441 [2024-12-05 20:43:42.694144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4c410 is same with the state(6) to be set 00:23:49.441 [2024-12-05 20:43:42.694223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.441 [2024-12-05 20:43:42.694234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d75290 with addr=10.0.0.2, port=4420 00:23:49.441 [2024-12-05 20:43:42.694241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d75290 is same with the state(6) to be set 00:23:49.441 [2024-12-05 20:43:42.694249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7b3b0 (9): Bad file descriptor 00:23:49.441 [2024-12-05 20:43:42.694258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:49.441 [2024-12-05 20:43:42.694265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:49.441 [2024-12-05 20:43:42.694273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:23:49.441 [2024-12-05 20:43:42.694281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:49.441 [2024-12-05 20:43:42.694290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:49.441 [2024-12-05 20:43:42.694296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:49.441 [2024-12-05 20:43:42.694303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:49.441 [2024-12-05 20:43:42.694309] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:49.442 [2024-12-05 20:43:42.694316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:49.442 [2024-12-05 20:43:42.694322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:49.442 [2024-12-05 20:43:42.694328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:49.442 [2024-12-05 20:43:42.694334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:49.442 [2024-12-05 20:43:42.694341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:49.442 [2024-12-05 20:43:42.694347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:49.442 [2024-12-05 20:43:42.694353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:23:49.442 [2024-12-05 20:43:42.694359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:49.442 [2024-12-05 20:43:42.694408] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:49.442 [2024-12-05 20:43:42.694707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1835610 (9): Bad file descriptor 00:23:49.442 [2024-12-05 20:43:42.694722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19212e0 (9): Bad file descriptor 00:23:49.442 [2024-12-05 20:43:42.694731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d415d0 (9): Bad file descriptor 00:23:49.442 [2024-12-05 20:43:42.694739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4c410 (9): Bad file descriptor 00:23:49.442 [2024-12-05 20:43:42.694748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d75290 (9): Bad file descriptor 00:23:49.442 [2024-12-05 20:43:42.694756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:49.442 [2024-12-05 20:43:42.694766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:49.442 [2024-12-05 20:43:42.694773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:49.442 [2024-12-05 20:43:42.694779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:23:49.442 [2024-12-05 20:43:42.694813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:49.442 [2024-12-05 20:43:42.694825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:49.442 [2024-12-05 20:43:42.694834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:49.442 [2024-12-05 20:43:42.694842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:49.442 [2024-12-05 20:43:42.694868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:49.442 [2024-12-05 20:43:42.694875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:49.442 [2024-12-05 20:43:42.694881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:49.442 [2024-12-05 20:43:42.694887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:49.442 [2024-12-05 20:43:42.694894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:49.442 [2024-12-05 20:43:42.694900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:49.442 [2024-12-05 20:43:42.694907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:49.442 [2024-12-05 20:43:42.694913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:23:49.442 [2024-12-05 20:43:42.694920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:49.442 [2024-12-05 20:43:42.694926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:49.442 [2024-12-05 20:43:42.694932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:49.442 [2024-12-05 20:43:42.694938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:49.442 [2024-12-05 20:43:42.694944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:49.442 [2024-12-05 20:43:42.694950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:49.442 [2024-12-05 20:43:42.694957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:49.442 [2024-12-05 20:43:42.694962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:49.442 [2024-12-05 20:43:42.694969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:49.442 [2024-12-05 20:43:42.694975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:49.442 [2024-12-05 20:43:42.694981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:49.442 [2024-12-05 20:43:42.694986] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:23:49.442 [2024-12-05 20:43:42.695241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.442 [2024-12-05 20:43:42.695257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8ec40 with addr=10.0.0.2, port=4420
00:23:49.442 [2024-12-05 20:43:42.695269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8ec40 is same with the state(6) to be set
00:23:49.442 [2024-12-05 20:43:42.695329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.442 [2024-12-05 20:43:42.695338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19167a0 with addr=10.0.0.2, port=4420
00:23:49.442 [2024-12-05 20:43:42.695346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19167a0 is same with the state(6) to be set
00:23:49.442 [2024-12-05 20:43:42.695540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.442 [2024-12-05 20:43:42.695551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1920440 with addr=10.0.0.2, port=4420
00:23:49.442 [2024-12-05 20:43:42.695559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1920440 is same with the state(6) to be set
00:23:49.442 [2024-12-05 20:43:42.695630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.442 [2024-12-05 20:43:42.695641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1915320 with addr=10.0.0.2, port=4420
00:23:49.442 [2024-12-05 20:43:42.695649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1915320 is same with the state(6) to be set
00:23:49.442 [2024-12-05 20:43:42.695678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8ec40 (9): Bad file descriptor
00:23:49.442 [2024-12-05 20:43:42.695688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19167a0 (9): Bad file descriptor
00:23:49.442 [2024-12-05 20:43:42.695697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1920440 (9): Bad file descriptor
00:23:49.442 [2024-12-05 20:43:42.695705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1915320 (9): Bad file descriptor
00:23:49.442 [2024-12-05 20:43:42.695728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:23:49.442 [2024-12-05 20:43:42.695735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:23:49.442 [2024-12-05 20:43:42.695742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:23:49.442 [2024-12-05 20:43:42.695749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:23:49.442 [2024-12-05 20:43:42.695757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:49.442 [2024-12-05 20:43:42.695762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:49.442 [2024-12-05 20:43:42.695770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:49.442 [2024-12-05 20:43:42.695775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:49.442 [2024-12-05 20:43:42.695782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:49.442 [2024-12-05 20:43:42.695788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:49.442 [2024-12-05 20:43:42.695794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:49.442 [2024-12-05 20:43:42.695801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:49.442 [2024-12-05 20:43:42.695807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:49.442 [2024-12-05 20:43:42.695813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:49.442 [2024-12-05 20:43:42.695818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:49.442 [2024-12-05 20:43:42.695826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:49.701 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:23:50.637 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 428862
00:23:50.637 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 428862
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 428862
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:50.638 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:50.638 rmmod nvme_tcp
00:23:50.638 rmmod nvme_fabrics
00:23:50.898 rmmod nvme_keyring
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 428551 ']'
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 428551
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428551 ']'
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428551
00:23:50.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (428551) - No such process
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 428551 is not found'
Process with pid 428551 is not found
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:50.898 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:52.810 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:52.810
00:23:52.810 real 0m8.339s
00:23:52.810 user 0m21.971s
00:23:52.810 sys 0m1.377s
00:23:52.810 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:52.810 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:52.810 ************************************
00:23:52.810 END TEST nvmf_shutdown_tc3
00:23:52.810 ************************************
00:23:52.810 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:23:52.810 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:23:52.810 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:23:52.810 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:52.810 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:52.810 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:53.071 ************************************
00:23:53.071 START TEST nvmf_shutdown_tc4
00:23:53.071 ************************************
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:23:53.071 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:23:53.072 Found net devices under 0000:af:00.0: cvl_0_0
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:53.072 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
Found net devices under 0000:af:00.1: cvl_0_1
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:53.073 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:53.332 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:53.332 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:53.332 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:53.332 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:53.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:53.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms
00:23:53.332
00:23:53.332 --- 10.0.0.2 ping statistics ---
00:23:53.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:53.332 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms
00:23:53.332 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:53.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:53.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms
00:23:53.332
00:23:53.332 --- 10.0.0.1 ping statistics ---
00:23:53.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:53.332 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=430203
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 430203
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 430203 ']'
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.333 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:53.333 [2024-12-05 20:43:46.660889] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:23:53.333 [2024-12-05 20:43:46.660932] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.333 [2024-12-05 20:43:46.736076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.591 [2024-12-05 20:43:46.775830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.591 [2024-12-05 20:43:46.775867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.591 [2024-12-05 20:43:46.775873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.591 [2024-12-05 20:43:46.775879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.591 [2024-12-05 20:43:46.775884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.591 [2024-12-05 20:43:46.777560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.591 [2024-12-05 20:43:46.777674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.591 [2024-12-05 20:43:46.777783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.591 [2024-12-05 20:43:46.777784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.159 [2024-12-05 20:43:47.506768] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.159 20:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.159 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.160 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.420 Malloc1 00:23:54.420 [2024-12-05 20:43:47.628281] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.420 Malloc2 00:23:54.420 Malloc3 00:23:54.420 Malloc4 00:23:54.420 Malloc5 00:23:54.420 Malloc6 00:23:54.420 Malloc7 00:23:54.679 Malloc8 00:23:54.679 Malloc9 
00:23:54.680 Malloc10 00:23:54.680 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.680 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:54.680 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.680 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.680 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=430488 00:23:54.680 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:54.680 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:54.940 [2024-12-05 20:43:48.135750] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 430203 00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 430203 ']' 00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 430203 00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 430203 00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 430203' 00:24:00.269 killing process with pid 430203 00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 430203 00:24:00.269 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 430203 00:24:00.269 Write completed with error (sct=0, sc=8) 00:24:00.269 Write completed with error (sct=0, sc=8) 00:24:00.269 Write completed with error (sct=0, sc=8) 00:24:00.269 Write completed with error 
(sct=0, sc=8) 00:24:00.269 starting I/O failed: -6 00:24:00.269 Write completed with error (sct=0, sc=8) 00:24:00.269 Write completed with error (sct=0, sc=8) 00:24:00.269 Write completed with error (sct=0, sc=8) 00:24:00.269 Write completed with error (sct=0, sc=8) 00:24:00.269 starting I/O failed: -6 00:24:00.269 Write completed with error (sct=0, sc=8) 00:24:00.269 Write completed with error (sct=0, sc=8) 00:24:00.269 Write completed with error (sct=0, sc=8) 00:24:00.269 Write completed with error (sct=0, sc=8) 00:24:00.269 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 
Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 [2024-12-05 20:43:53.135572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 [2024-12-05 20:43:53.135737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d1d0 is same with starting I/O failed: -6 00:24:00.270 the state(6) to be set 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 [2024-12-05 20:43:53.135772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d1d0 is same with the state(6) to be set 00:24:00.270 [2024-12-05 20:43:53.135779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d1d0 is same with the state(6) to be set 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 
[2024-12-05 20:43:53.135940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6a0 is same with the state(6) to be set 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 [2024-12-05 20:43:53.135965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6a0 is same with the state(6) to be set 00:24:00.270 [2024-12-05 20:43:53.135973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6a0 is same with Write completed with error (sct=0, sc=8) 00:24:00.270 the state(6) to be set 00:24:00.270 starting I/O failed: -6 00:24:00.270 [2024-12-05 20:43:53.135981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6a0 is same with the state(6) to be set 00:24:00.270 [2024-12-05 20:43:53.135987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6a0 is same with the state(6) to be set 00:24:00.270 [2024-12-05 20:43:53.135993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6a0 is same with Write completed with error (sct=0, sc=8) 00:24:00.270 the state(6) to be set 00:24:00.270 [2024-12-05 20:43:53.135999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6a0 is same with the state(6) to be set 00:24:00.270 [2024-12-05 20:43:53.136005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6a0 is same with Write completed with error (sct=0, sc=8) 00:24:00.270 the state(6) to be set 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 
starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 [2024-12-05 20:43:53.136292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146db70 is same with the state(6) to be set 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 [2024-12-05 20:43:53.136315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146db70 is same with the state(6) to be set 00:24:00.270 [2024-12-05 20:43:53.136323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146db70 is same with Write completed with error (sct=0, sc=8) 00:24:00.270 the state(6) to be set 00:24:00.270 starting I/O failed: -6 00:24:00.270 [2024-12-05 20:43:53.136331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146db70 is same with the state(6) to be set 00:24:00.270 [2024-12-05 20:43:53.136338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146db70 is same with the state(6) to be set 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 [2024-12-05 20:43:53.136345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x146db70 is same with the state(6) to be set 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 [2024-12-05 20:43:53.136466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 [2024-12-05 20:43:53.136734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cd00 is same with Write completed with error (sct=0, sc=8) 00:24:00.270 the state(6) to be set 00:24:00.270 starting I/O failed: -6 00:24:00.270 [2024-12-05 20:43:53.136756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cd00 is same 
with the state(6) to be set 00:24:00.270 [2024-12-05 20:43:53.136764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cd00 is same with the state(6) to be set 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 [2024-12-05 20:43:53.136769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cd00 is same with starting I/O failed: -6 00:24:00.270 the state(6) to be set 00:24:00.270 [2024-12-05 20:43:53.136777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cd00 is same with the state(6) to be set 00:24:00.270 [2024-12-05 20:43:53.136782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cd00 is same with the state(6) to be set 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.270 starting I/O failed: -6 00:24:00.270 Write completed with error (sct=0, sc=8) 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 
00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 [2024-12-05 20:43:53.137258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ff150 is same with Write completed with error (sct=0, sc=8) 00:24:00.271 the state(6) to be set 00:24:00.271 [2024-12-05 20:43:53.137277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ff150 is same with Write completed with error (sct=0, sc=8) 00:24:00.271 the state(6) to be set 00:24:00.271 starting I/O failed: -6 00:24:00.271 [2024-12-05 20:43:53.137288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ff150 is same with the state(6) to be set 00:24:00.271 [2024-12-05 20:43:53.137295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ff150 is same with the state(6) to be set 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 [2024-12-05 20:43:53.137301] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ff150 is same with the state(6) to be set 00:24:00.271 starting I/O failed: -6 00:24:00.271 [2024-12-05 20:43:53.137307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ff150 is same with the state(6) to be set 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 [2024-12-05 20:43:53.137425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 
00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: -6 00:24:00.271 Write completed with error (sct=0, sc=8) 00:24:00.271 starting I/O failed: 
00:24:00.271 Write completed with error (sct=0, sc=8)
00:24:00.271 starting I/O failed: -6
00:24:00.271 [... preceding pair of messages repeats for each in-flight write on the affected qpairs; verbatim repeats omitted ...]
00:24:00.271 [2024-12-05 20:43:53.139084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:00.271 NVMe io qpair process completion error
00:24:00.272 [2024-12-05 20:43:53.140006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:00.272 [2024-12-05 20:43:53.140842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:00.272 [2024-12-05 20:43:53.141750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:00.273 [2024-12-05 20:43:53.143150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.273 NVMe io qpair process completion error
00:24:00.274 [2024-12-05 20:43:53.144003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:00.274 [2024-12-05 20:43:53.144821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:00.274 [2024-12-05 20:43:53.145740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:00.275 [2024-12-05 20:43:53.147398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.275 NVMe io qpair process completion error
00:24:00.275 [2024-12-05 20:43:53.148807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:00.276 [2024-12-05 20:43:53.149655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 
Write completed with error (sct=0, sc=8) 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 [2024-12-05 20:43:53.150582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.276 Write completed with error (sct=0, sc=8) 00:24:00.276 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 
00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, 
sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error 
(sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 [2024-12-05 20:43:53.152454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.277 NVMe io qpair process completion error 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 
00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 [2024-12-05 20:43:53.153373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 
00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 starting I/O failed: -6 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.277 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write 
completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 [2024-12-05 20:43:53.154195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write 
completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 
00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 [2024-12-05 20:43:53.155097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 
starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.278 starting I/O failed: -6 00:24:00.278 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 
00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, 
sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 [2024-12-05 20:43:53.156802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.279 NVMe io qpair process completion error 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 
00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 [2024-12-05 20:43:53.157740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 Write completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6 00:24:00.279 Write 
completed with error (sct=0, sc=8) 00:24:00.279 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding write ...]
00:24:00.279 [2024-12-05 20:43:53.158537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write completion errors ...]
00:24:00.280 [2024-12-05 20:43:53.159488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write completion errors ...]
00:24:00.281 [2024-12-05 20:43:53.164191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.281 NVMe io qpair process completion error
[... repeated write completion errors ...]
00:24:00.281 [2024-12-05 20:43:53.165056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write completion errors ...]
00:24:00.282 [2024-12-05 20:43:53.165876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write completion errors ...]
00:24:00.282 [2024-12-05 20:43:53.166798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write completion errors ...]
00:24:00.282 [2024-12-05 20:43:53.169032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.283 NVMe io qpair process completion error
[... repeated write completion errors ...]
00:24:00.283 [2024-12-05 20:43:53.169890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write completion errors ...]
00:24:00.283 [2024-12-05 20:43:53.170710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write completion errors ...]
00:24:00.284 [2024-12-05 20:43:53.171682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write completion errors ...] 00:24:00.284 starting I/O
failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting 
I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 [2024-12-05 20:43:53.173511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.284 NVMe io qpair process completion error 
00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error 
(sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 
00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 starting I/O failed: -6 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.284 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 [2024-12-05 20:43:53.174833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with 
error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 
Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 [2024-12-05 20:43:53.175880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error 
(sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with 
error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed 
with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.285 Write completed with error (sct=0, sc=8) 00:24:00.285 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 [2024-12-05 20:43:53.178941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.286 NVMe io qpair process completion error 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed 
with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 [2024-12-05 20:43:53.179801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 
00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write 
completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 [2024-12-05 20:43:53.180624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error 
(sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.286 Write completed with error (sct=0, sc=8) 00:24:00.286 starting I/O failed: -6 00:24:00.287 Write completed with error (sct=0, sc=8) 00:24:00.287 starting I/O failed: -6 00:24:00.287 Write completed with error (sct=0, sc=8) 00:24:00.287 Write completed with error (sct=0, sc=8) 00:24:00.287 starting I/O failed: -6 00:24:00.287 Write completed with error (sct=0, sc=8) 00:24:00.287 starting I/O failed: -6 00:24:00.287 Write completed with error (sct=0, sc=8) 00:24:00.287 starting I/O failed: -6 00:24:00.287 Write completed with error (sct=0, sc=8) 00:24:00.287 Write completed with error (sct=0, sc=8) 00:24:00.287 starting 
I/O failed: -6 00:24:00.287 Write completed with error (sct=0, sc=8) 00:24:00.287 starting I/O failed: -6 00:24:00.287 (previous two messages repeated many times) 00:24:00.287 [2024-12-05 20:43:53.182996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.287 Write completed with error (sct=0, sc=8) 00:24:00.287 starting I/O failed: -6 00:24:00.287 (previous two messages repeated many times) 00:24:00.287 [2024-12-05 20:43:53.185776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.287 NVMe io qpair process completion error 00:24:00.287 Initializing NVMe Controllers 00:24:00.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:24:00.287 Controller IO queue size 128, less than required. 00:24:00.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:24:00.287 Controller IO queue size 128, less than required. 00:24:00.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:00.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:24:00.287 Controller IO queue size 128, less than required. 00:24:00.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:24:00.287 Controller IO queue size 128, less than required. 00:24:00.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:24:00.287 Controller IO queue size 128, less than required. 00:24:00.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:24:00.287 Controller IO queue size 128, less than required. 00:24:00.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:24:00.287 Controller IO queue size 128, less than required. 00:24:00.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:24:00.287 Controller IO queue size 128, less than required. 00:24:00.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:24:00.287 Controller IO queue size 128, less than required. 00:24:00.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:00.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.287 Controller IO queue size 128, less than required. 00:24:00.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:24:00.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:24:00.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:24:00.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:24:00.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:24:00.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:24:00.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:24:00.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:24:00.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:24:00.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:00.288 Initialization complete. Launching workers. 
00:24:00.288 ======================================================== 00:24:00.288 Latency(us) 00:24:00.288 Device Information : IOPS MiB/s Average min max 00:24:00.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2406.77 103.42 53186.49 838.68 92590.01 00:24:00.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2423.54 104.14 52825.11 838.24 127187.14 00:24:00.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2418.23 103.91 52957.21 859.86 101982.12 00:24:00.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2416.53 103.84 53006.54 860.84 101289.71 00:24:00.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2425.03 104.20 52831.38 823.46 100891.10 00:24:00.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2418.02 103.90 53030.65 814.48 98623.16 00:24:00.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2405.49 103.36 53319.64 851.12 101485.07 00:24:00.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2398.28 103.05 53489.50 888.07 104062.39 00:24:00.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2387.02 102.57 53777.29 858.94 107098.32 00:24:00.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2395.52 102.93 53036.64 663.14 96682.19 00:24:00.288 ======================================================== 00:24:00.288 Total : 24094.43 1035.31 53144.79 663.14 127187.14 00:24:00.288 00:24:00.288 [2024-12-05 20:43:53.190860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe36390 is same with the state(6) to be set 00:24:00.288 [2024-12-05 20:43:53.190904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe369f0 is same with the state(6) to be set 00:24:00.288 [2024-12-05 20:43:53.190932] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe376b0 is same with the state(6) to be set 00:24:00.288 [2024-12-05 20:43:53.190958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37050 is same with the state(6) to be set 00:24:00.288 [2024-12-05 20:43:53.190984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe366c0 is same with the state(6) to be set 00:24:00.288 [2024-12-05 20:43:53.191011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe36060 is same with the state(6) to be set 00:24:00.288 [2024-12-05 20:43:53.191037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe379e0 is same with the state(6) to be set 00:24:00.288 [2024-12-05 20:43:53.191067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe38540 is same with the state(6) to be set 00:24:00.288 [2024-12-05 20:43:53.191093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37380 is same with the state(6) to be set 00:24:00.288 [2024-12-05 20:43:53.191118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe38360 is same with the state(6) to be set 00:24:00.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:00.288 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 430488 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 430488 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@640 -- # local arg=wait 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 430488 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@516 -- # nvmfcleanup 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.276 rmmod nvme_tcp 00:24:01.276 rmmod nvme_fabrics 00:24:01.276 rmmod nvme_keyring 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 430203 ']' 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 430203 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 430203 ']' 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 430203 00:24:01.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (430203) - No such process 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 430203 is not found' 00:24:01.276 Process with pid 430203 is not found 00:24:01.276 
20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:01.276 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:01.277 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:01.277 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:24:01.277 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:01.277 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:24:01.277 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.277 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:01.277 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.277 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.277 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.319 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:03.319 00:24:03.319 real 0m10.401s 00:24:03.319 user 0m27.331s 00:24:03.319 sys 0m5.323s 00:24:03.319 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.319 20:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:03.319 ************************************ 00:24:03.319 END TEST nvmf_shutdown_tc4 00:24:03.319 ************************************ 00:24:03.319 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:03.319 00:24:03.319 real 0m42.038s 00:24:03.319 user 1m45.232s 00:24:03.319 sys 0m14.239s 00:24:03.319 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.319 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:03.319 ************************************ 00:24:03.319 END TEST nvmf_shutdown 00:24:03.319 ************************************ 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:03.611 ************************************ 00:24:03.611 START TEST nvmf_nsid 00:24:03.611 ************************************ 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:03.611 * Looking for test storage... 
00:24:03.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.611 
20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.611 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:03.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.611 --rc genhtml_branch_coverage=1 00:24:03.611 --rc genhtml_function_coverage=1 00:24:03.612 --rc genhtml_legend=1 00:24:03.612 --rc geninfo_all_blocks=1 00:24:03.612 --rc 
geninfo_unexecuted_blocks=1 00:24:03.612 00:24:03.612 ' 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:03.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.612 --rc genhtml_branch_coverage=1 00:24:03.612 --rc genhtml_function_coverage=1 00:24:03.612 --rc genhtml_legend=1 00:24:03.612 --rc geninfo_all_blocks=1 00:24:03.612 --rc geninfo_unexecuted_blocks=1 00:24:03.612 00:24:03.612 ' 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:03.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.612 --rc genhtml_branch_coverage=1 00:24:03.612 --rc genhtml_function_coverage=1 00:24:03.612 --rc genhtml_legend=1 00:24:03.612 --rc geninfo_all_blocks=1 00:24:03.612 --rc geninfo_unexecuted_blocks=1 00:24:03.612 00:24:03.612 ' 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:03.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.612 --rc genhtml_branch_coverage=1 00:24:03.612 --rc genhtml_function_coverage=1 00:24:03.612 --rc genhtml_legend=1 00:24:03.612 --rc geninfo_all_blocks=1 00:24:03.612 --rc geninfo_unexecuted_blocks=1 00:24:03.612 00:24:03.612 ' 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.612 20:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:03.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:24:03.612 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.612 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:03.612 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:03.612 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:03.612 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:10.435 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.435 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:10.435 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:10.435 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:10.436 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:10.436 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:10.436 Found net devices under 0000:af:00.0: cvl_0_0 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:10.436 Found net devices under 0000:af:00.1: cvl_0_1 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:10.436 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.437 20:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.437 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:24:10.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:24:10.437 00:24:10.437 --- 10.0.0.2 ping statistics --- 00:24:10.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.437 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:24:10.437 00:24:10.437 --- 10.0.0.1 ping statistics --- 00:24:10.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.437 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.437 20:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=435207 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 435207 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 435207 ']' 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.437 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:10.437 [2024-12-05 20:44:02.993016] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:24:10.437 [2024-12-05 20:44:02.993079] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.437 [2024-12-05 20:44:03.071162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.437 [2024-12-05 20:44:03.108652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.437 [2024-12-05 20:44:03.108685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.437 [2024-12-05 20:44:03.108691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.437 [2024-12-05 20:44:03.108696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.437 [2024-12-05 20:44:03.108701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:10.437 [2024-12-05 20:44:03.109247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.437 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.437 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:10.437 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.437 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.437 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:10.437 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.437 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:10.437 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:10.437 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=435392 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.438 
20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=f6b3819c-9a6b-4036-98cb-10d7ac521834 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1421c631-6927-44f9-a27a-ddbafe885521 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=38dca65d-8f42-491d-901b-7cad5ce21137 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:10.438 [2024-12-05 20:44:03.278590] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:24:10.438 [2024-12-05 20:44:03.278630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435392 ] 00:24:10.438 null0 00:24:10.438 null1 00:24:10.438 null2 00:24:10.438 [2024-12-05 20:44:03.303989] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.438 [2024-12-05 20:44:03.328200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.438 [2024-12-05 20:44:03.349461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 435392 /var/tmp/tgt2.sock 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 435392 ']' 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:10.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:10.438 [2024-12-05 20:44:03.392886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:10.438 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:10.697 [2024-12-05 20:44:03.904351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.697 [2024-12-05 20:44:03.920457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:10.697 nvme0n1 nvme0n2 00:24:10.697 nvme1n1 00:24:10.697 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:10.697 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:10.697 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 
00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:12.074 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid f6b3819c-9a6b-4036-98cb-10d7ac521834 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:13.008 
20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f6b3819c9a6b403698cb10d7ac521834 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F6B3819C9A6B403698CB10D7AC521834 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ F6B3819C9A6B403698CB10D7AC521834 == \F\6\B\3\8\1\9\C\9\A\6\B\4\0\3\6\9\8\C\B\1\0\D\7\A\C\5\2\1\8\3\4 ]] 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1421c631-6927-44f9-a27a-ddbafe885521 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 
00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1421c631692744f9a27addbafe885521 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1421C631692744F9A27ADDBAFE885521 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1421C631692744F9A27ADDBAFE885521 == \1\4\2\1\C\6\3\1\6\9\2\7\4\4\F\9\A\2\7\A\D\D\B\A\F\E\8\8\5\5\2\1 ]] 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 38dca65d-8f42-491d-901b-7cad5ce21137 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 
00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=38dca65d8f42491d901b7cad5ce21137 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 38DCA65D8F42491D901B7CAD5CE21137 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 38DCA65D8F42491D901B7CAD5CE21137 == \3\8\D\C\A\6\5\D\8\F\4\2\4\9\1\D\9\0\1\B\7\C\A\D\5\C\E\2\1\1\3\7 ]] 00:24:13.008 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 435392 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 435392 ']' 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 435392 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435392 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435392' 00:24:13.267 killing process with pid 435392 00:24:13.267 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 435392 00:24:13.267 20:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 435392 00:24:13.834 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:13.834 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.834 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:13.834 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.834 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:24:13.834 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.834 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.834 rmmod nvme_tcp 00:24:13.834 rmmod nvme_fabrics 00:24:13.834 rmmod nvme_keyring 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 435207 ']' 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 435207 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 435207 ']' 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 435207 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435207 00:24:13.834 20:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435207' 00:24:13.834 killing process with pid 435207 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 435207 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 435207 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.834 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.364 20:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.364 00:24:16.364 real 0m12.544s 00:24:16.364 user 0m9.877s 00:24:16.364 sys 0m5.555s 00:24:16.364 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.364 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:16.364 ************************************ 00:24:16.364 END TEST nvmf_nsid 00:24:16.364 ************************************ 00:24:16.365 20:44:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:16.365 00:24:16.365 real 12m8.049s 00:24:16.365 user 26m10.579s 00:24:16.365 sys 3m37.835s 00:24:16.365 20:44:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.365 20:44:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:16.365 ************************************ 00:24:16.365 END TEST nvmf_target_extra 00:24:16.365 ************************************ 00:24:16.365 20:44:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:16.365 20:44:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.365 20:44:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.365 20:44:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:16.365 ************************************ 00:24:16.365 START TEST nvmf_host 00:24:16.365 ************************************ 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:16.365 * Looking for test storage... 
00:24:16.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:16.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.365 --rc genhtml_branch_coverage=1 00:24:16.365 --rc genhtml_function_coverage=1 00:24:16.365 --rc genhtml_legend=1 00:24:16.365 --rc geninfo_all_blocks=1 00:24:16.365 --rc geninfo_unexecuted_blocks=1 00:24:16.365 00:24:16.365 ' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:16.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.365 --rc genhtml_branch_coverage=1 00:24:16.365 --rc genhtml_function_coverage=1 00:24:16.365 --rc genhtml_legend=1 00:24:16.365 --rc 
geninfo_all_blocks=1 00:24:16.365 --rc geninfo_unexecuted_blocks=1 00:24:16.365 00:24:16.365 ' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:16.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.365 --rc genhtml_branch_coverage=1 00:24:16.365 --rc genhtml_function_coverage=1 00:24:16.365 --rc genhtml_legend=1 00:24:16.365 --rc geninfo_all_blocks=1 00:24:16.365 --rc geninfo_unexecuted_blocks=1 00:24:16.365 00:24:16.365 ' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:16.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.365 --rc genhtml_branch_coverage=1 00:24:16.365 --rc genhtml_function_coverage=1 00:24:16.365 --rc genhtml_legend=1 00:24:16.365 --rc geninfo_all_blocks=1 00:24:16.365 --rc geninfo_unexecuted_blocks=1 00:24:16.365 00:24:16.365 ' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.365 ************************************ 00:24:16.365 START TEST nvmf_multicontroller 00:24:16.365 ************************************ 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:16.365 * Looking for test storage... 
00:24:16.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:24:16.365 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:16.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.624 --rc genhtml_branch_coverage=1 00:24:16.624 --rc genhtml_function_coverage=1 
00:24:16.624 --rc genhtml_legend=1 00:24:16.624 --rc geninfo_all_blocks=1 00:24:16.624 --rc geninfo_unexecuted_blocks=1 00:24:16.624 00:24:16.624 ' 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:16.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.624 --rc genhtml_branch_coverage=1 00:24:16.624 --rc genhtml_function_coverage=1 00:24:16.624 --rc genhtml_legend=1 00:24:16.624 --rc geninfo_all_blocks=1 00:24:16.624 --rc geninfo_unexecuted_blocks=1 00:24:16.624 00:24:16.624 ' 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:16.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.624 --rc genhtml_branch_coverage=1 00:24:16.624 --rc genhtml_function_coverage=1 00:24:16.624 --rc genhtml_legend=1 00:24:16.624 --rc geninfo_all_blocks=1 00:24:16.624 --rc geninfo_unexecuted_blocks=1 00:24:16.624 00:24:16.624 ' 00:24:16.624 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:16.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.625 --rc genhtml_branch_coverage=1 00:24:16.625 --rc genhtml_function_coverage=1 00:24:16.625 --rc genhtml_legend=1 00:24:16.625 --rc geninfo_all_blocks=1 00:24:16.625 --rc geninfo_unexecuted_blocks=1 00:24:16.625 00:24:16.625 ' 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.625 20:44:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.625 20:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:23.193 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.193 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:23.194 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.194 20:44:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:23.194 Found net devices under 0000:af:00.0: cvl_0_0 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:23.194 Found net devices under 0000:af:00.1: cvl_0_1 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:23.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:24:23.194 00:24:23.194 --- 10.0.0.2 ping statistics --- 00:24:23.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.194 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:24:23.194 00:24:23.194 --- 10.0.0.1 ping statistics --- 00:24:23.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.194 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=439814 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 439814 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 439814 ']' 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.194 20:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.194 [2024-12-05 20:44:15.896895] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:24:23.194 [2024-12-05 20:44:15.896936] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.194 [2024-12-05 20:44:15.969549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:23.194 [2024-12-05 20:44:16.008484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.194 [2024-12-05 20:44:16.008516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:23.194 [2024-12-05 20:44:16.008522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.194 [2024-12-05 20:44:16.008528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.194 [2024-12-05 20:44:16.008533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.194 [2024-12-05 20:44:16.009955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.194 [2024-12-05 20:44:16.010086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.195 [2024-12-05 20:44:16.010087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 [2024-12-05 20:44:16.763931] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 Malloc0 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 [2024-12-05 
20:44:16.828148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 [2024-12-05 20:44:16.836093] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 Malloc1 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=440095 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 440095 /var/tmp/bdevperf.sock 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 440095 ']' 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.454 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.713 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.713 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.713 20:44:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.713 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.713 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:23.713 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:23.713 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.713 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.972 NVMe0n1 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.972 1 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:23.972 20:44:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.972 request: 00:24:23.972 { 00:24:23.972 "name": "NVMe0", 00:24:23.972 "trtype": "tcp", 00:24:23.972 "traddr": "10.0.0.2", 00:24:23.972 "adrfam": "ipv4", 00:24:23.972 "trsvcid": "4420", 00:24:23.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.972 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:23.972 "hostaddr": "10.0.0.1", 00:24:23.972 "prchk_reftag": false, 00:24:23.972 "prchk_guard": false, 00:24:23.972 "hdgst": false, 00:24:23.972 "ddgst": false, 00:24:23.972 "allow_unrecognized_csi": false, 00:24:23.972 "method": "bdev_nvme_attach_controller", 00:24:23.972 "req_id": 1 00:24:23.972 } 00:24:23.972 Got JSON-RPC error response 00:24:23.972 response: 00:24:23.972 { 00:24:23.972 "code": -114, 00:24:23.972 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:23.972 } 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:23.972 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:23.973 20:44:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.973 request: 00:24:23.973 { 00:24:23.973 "name": "NVMe0", 00:24:23.973 "trtype": "tcp", 00:24:23.973 "traddr": "10.0.0.2", 00:24:23.973 "adrfam": "ipv4", 00:24:23.973 "trsvcid": "4420", 00:24:23.973 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:23.973 "hostaddr": "10.0.0.1", 00:24:23.973 "prchk_reftag": false, 00:24:23.973 "prchk_guard": false, 00:24:23.973 "hdgst": false, 00:24:23.973 "ddgst": false, 00:24:23.973 "allow_unrecognized_csi": false, 00:24:23.973 "method": "bdev_nvme_attach_controller", 00:24:23.973 "req_id": 1 00:24:23.973 } 00:24:23.973 Got JSON-RPC error response 00:24:23.973 response: 00:24:23.973 { 00:24:23.973 "code": -114, 00:24:23.973 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:23.973 } 00:24:23.973 20:44:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.973 request: 00:24:23.973 { 00:24:23.973 "name": "NVMe0", 00:24:23.973 "trtype": "tcp", 00:24:23.973 "traddr": "10.0.0.2", 00:24:23.973 "adrfam": "ipv4", 00:24:23.973 "trsvcid": "4420", 00:24:23.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.973 "hostaddr": "10.0.0.1", 00:24:23.973 "prchk_reftag": false, 00:24:23.973 "prchk_guard": false, 00:24:23.973 "hdgst": false, 00:24:23.973 "ddgst": false, 00:24:23.973 "multipath": "disable", 00:24:23.973 "allow_unrecognized_csi": false, 00:24:23.973 "method": "bdev_nvme_attach_controller", 00:24:23.973 "req_id": 1 00:24:23.973 } 00:24:23.973 Got JSON-RPC error response 00:24:23.973 response: 00:24:23.973 { 00:24:23.973 "code": -114, 00:24:23.973 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:23.973 } 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.973 request: 00:24:23.973 { 00:24:23.973 "name": "NVMe0", 00:24:23.973 "trtype": "tcp", 00:24:23.973 "traddr": "10.0.0.2", 00:24:23.973 "adrfam": "ipv4", 00:24:23.973 "trsvcid": "4420", 00:24:23.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.973 "hostaddr": "10.0.0.1", 00:24:23.973 "prchk_reftag": false, 00:24:23.973 "prchk_guard": false, 00:24:23.973 "hdgst": false, 00:24:23.973 "ddgst": false, 00:24:23.973 "multipath": "failover", 00:24:23.973 "allow_unrecognized_csi": false, 00:24:23.973 "method": "bdev_nvme_attach_controller", 00:24:23.973 "req_id": 1 00:24:23.973 } 00:24:23.973 Got JSON-RPC error response 00:24:23.973 response: 00:24:23.973 { 00:24:23.973 "code": -114, 00:24:23.973 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:23.973 } 00:24:23.973 20:44:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.973 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.232 NVMe0n1 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.232 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.232 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.490 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.490 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:24.490 20:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.424 { 00:24:25.424 "results": [ 00:24:25.424 { 00:24:25.424 "job": "NVMe0n1", 00:24:25.424 "core_mask": "0x1", 00:24:25.424 "workload": "write", 00:24:25.424 "status": "finished", 00:24:25.424 "queue_depth": 128, 00:24:25.424 "io_size": 4096, 00:24:25.424 "runtime": 1.004496, 00:24:25.424 "iops": 27259.44155078766, 00:24:25.424 "mibps": 106.48219355776429, 00:24:25.424 "io_failed": 0, 00:24:25.424 "io_timeout": 0, 00:24:25.424 "avg_latency_us": 4686.903639683667, 00:24:25.424 "min_latency_us": 2770.3854545454546, 00:24:25.424 "max_latency_us": 10128.290909090909 00:24:25.424 } 00:24:25.424 ], 00:24:25.424 "core_count": 1 00:24:25.424 } 00:24:25.424 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:25.424 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.424 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.424 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.424 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:25.424 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 440095 00:24:25.424 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 440095 ']' 00:24:25.424 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 440095 00:24:25.424 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:25.424 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.424 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 440095 00:24:25.683 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.683 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.683 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 440095' 00:24:25.683 killing process with pid 440095 00:24:25.683 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 440095 00:24:25.683 20:44:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 440095 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:25.683 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:25.684 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:25.684 [2024-12-05 20:44:16.937025] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:24:25.684 [2024-12-05 20:44:16.937080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440095 ] 00:24:25.684 [2024-12-05 20:44:17.010115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.684 [2024-12-05 20:44:17.049468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.684 [2024-12-05 20:44:17.661729] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name a8102771-f88d-4229-b125-79f2604dd903 already exists 00:24:25.684 [2024-12-05 20:44:17.661756] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:a8102771-f88d-4229-b125-79f2604dd903 alias for bdev NVMe1n1 00:24:25.684 [2024-12-05 20:44:17.661763] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:25.684 Running I/O for 1 seconds... 00:24:25.684 27190.00 IOPS, 106.21 MiB/s 00:24:25.684 Latency(us) 00:24:25.684 [2024-12-05T19:44:19.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.684 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:25.684 NVMe0n1 : 1.00 27259.44 106.48 0.00 0.00 4686.90 2770.39 10128.29 00:24:25.684 [2024-12-05T19:44:19.125Z] =================================================================================================================== 00:24:25.684 [2024-12-05T19:44:19.125Z] Total : 27259.44 106.48 0.00 0.00 4686.90 2770.39 10128.29 00:24:25.684 Received shutdown signal, test time was about 1.000000 seconds 00:24:25.684 00:24:25.684 Latency(us) 00:24:25.684 [2024-12-05T19:44:19.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.684 [2024-12-05T19:44:19.125Z] =================================================================================================================== 00:24:25.684 [2024-12-05T19:44:19.125Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:24:25.684 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.684 rmmod nvme_tcp 00:24:25.684 rmmod nvme_fabrics 00:24:25.684 rmmod nvme_keyring 00:24:25.684 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 439814 ']' 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 439814 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 439814 ']' 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 439814 
00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 439814 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 439814' 00:24:25.944 killing process with pid 439814 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 439814 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 439814 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:25.944 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.205 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.205 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.205 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:24:26.205 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.205 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.205 20:44:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.111 20:44:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:28.111 00:24:28.111 real 0m11.772s 00:24:28.111 user 0m14.188s 00:24:28.111 sys 0m5.237s 00:24:28.111 20:44:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.111 20:44:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:28.111 ************************************ 00:24:28.111 END TEST nvmf_multicontroller 00:24:28.111 ************************************ 00:24:28.112 20:44:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:28.112 20:44:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:28.112 20:44:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.112 20:44:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.112 ************************************ 00:24:28.112 START TEST nvmf_aer 00:24:28.112 ************************************ 00:24:28.112 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:28.371 * Looking for test storage... 
00:24:28.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.371 --rc genhtml_branch_coverage=1 00:24:28.371 --rc genhtml_function_coverage=1 00:24:28.371 --rc genhtml_legend=1 00:24:28.371 --rc geninfo_all_blocks=1 00:24:28.371 --rc geninfo_unexecuted_blocks=1 00:24:28.371 00:24:28.371 ' 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.371 --rc 
genhtml_branch_coverage=1 00:24:28.371 --rc genhtml_function_coverage=1 00:24:28.371 --rc genhtml_legend=1 00:24:28.371 --rc geninfo_all_blocks=1 00:24:28.371 --rc geninfo_unexecuted_blocks=1 00:24:28.371 00:24:28.371 ' 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.371 --rc genhtml_branch_coverage=1 00:24:28.371 --rc genhtml_function_coverage=1 00:24:28.371 --rc genhtml_legend=1 00:24:28.371 --rc geninfo_all_blocks=1 00:24:28.371 --rc geninfo_unexecuted_blocks=1 00:24:28.371 00:24:28.371 ' 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:28.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.371 --rc genhtml_branch_coverage=1 00:24:28.371 --rc genhtml_function_coverage=1 00:24:28.371 --rc genhtml_legend=1 00:24:28.371 --rc geninfo_all_blocks=1 00:24:28.371 --rc geninfo_unexecuted_blocks=1 00:24:28.371 00:24:28.371 ' 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.371 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.372 20:44:21 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.372 20:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:34.943 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:34.943 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:34.943 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.944 20:44:27 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:34.944 Found net devices under 0000:af:00.0: cvl_0_0 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:34.944 Found net devices under 0000:af:00.1: cvl_0_1 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:34.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:34.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:24:34.944 00:24:34.944 --- 10.0.0.2 ping statistics --- 00:24:34.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.944 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:24:34.944 00:24:34.944 --- 10.0.0.1 ping statistics --- 00:24:34.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.944 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=444099 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 444099 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 444099 ']' 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.944 20:44:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:34.944 [2024-12-05 20:44:27.772743] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:24:34.944 [2024-12-05 20:44:27.772782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.944 [2024-12-05 20:44:27.846385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.944 [2024-12-05 20:44:27.887022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:34.944 [2024-12-05 20:44:27.887056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.944 [2024-12-05 20:44:27.887066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.944 [2024-12-05 20:44:27.887071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.944 [2024-12-05 20:44:27.887076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.944 [2024-12-05 20:44:27.888656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.944 [2024-12-05 20:44:27.888770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.944 [2024-12-05 20:44:27.888881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.944 [2024-12-05 20:44:27.888882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.204 [2024-12-05 20:44:28.627615] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.204 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.463 Malloc0 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.463 [2024-12-05 20:44:28.698986] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.463 [ 00:24:35.463 { 00:24:35.463 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:35.463 "subtype": "Discovery", 00:24:35.463 "listen_addresses": [], 00:24:35.463 "allow_any_host": true, 00:24:35.463 "hosts": [] 00:24:35.463 }, 00:24:35.463 { 00:24:35.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.463 "subtype": "NVMe", 00:24:35.463 "listen_addresses": [ 00:24:35.463 { 00:24:35.463 "trtype": "TCP", 00:24:35.463 "adrfam": "IPv4", 00:24:35.463 "traddr": "10.0.0.2", 00:24:35.463 "trsvcid": "4420" 00:24:35.463 } 00:24:35.463 ], 00:24:35.463 "allow_any_host": true, 00:24:35.463 "hosts": [], 00:24:35.463 "serial_number": "SPDK00000000000001", 00:24:35.463 "model_number": "SPDK bdev Controller", 00:24:35.463 "max_namespaces": 2, 00:24:35.463 "min_cntlid": 1, 00:24:35.463 "max_cntlid": 65519, 00:24:35.463 "namespaces": [ 00:24:35.463 { 00:24:35.463 "nsid": 1, 00:24:35.463 "bdev_name": "Malloc0", 00:24:35.463 "name": "Malloc0", 00:24:35.463 "nguid": "B211E6AA82C94706B4E9A81E786D47FF", 00:24:35.463 "uuid": "b211e6aa-82c9-4706-b4e9-a81e786d47ff" 00:24:35.463 } 00:24:35.463 ] 00:24:35.463 } 00:24:35.463 ] 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=444303 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:35.463 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.723 Malloc1 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.723 20:44:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.723 Asynchronous Event Request test 00:24:35.723 Attaching to 10.0.0.2 00:24:35.723 Attached to 10.0.0.2 00:24:35.723 Registering asynchronous event callbacks... 00:24:35.723 Starting namespace attribute notice tests for all controllers... 00:24:35.723 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:35.723 aer_cb - Changed Namespace 00:24:35.723 Cleaning up... 
00:24:35.723 [ 00:24:35.723 { 00:24:35.723 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:35.723 "subtype": "Discovery", 00:24:35.723 "listen_addresses": [], 00:24:35.723 "allow_any_host": true, 00:24:35.723 "hosts": [] 00:24:35.723 }, 00:24:35.723 { 00:24:35.723 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.723 "subtype": "NVMe", 00:24:35.723 "listen_addresses": [ 00:24:35.723 { 00:24:35.723 "trtype": "TCP", 00:24:35.723 "adrfam": "IPv4", 00:24:35.723 "traddr": "10.0.0.2", 00:24:35.723 "trsvcid": "4420" 00:24:35.723 } 00:24:35.723 ], 00:24:35.723 "allow_any_host": true, 00:24:35.723 "hosts": [], 00:24:35.723 "serial_number": "SPDK00000000000001", 00:24:35.723 "model_number": "SPDK bdev Controller", 00:24:35.723 "max_namespaces": 2, 00:24:35.723 "min_cntlid": 1, 00:24:35.723 "max_cntlid": 65519, 00:24:35.723 "namespaces": [ 00:24:35.723 { 00:24:35.723 "nsid": 1, 00:24:35.723 "bdev_name": "Malloc0", 00:24:35.723 "name": "Malloc0", 00:24:35.723 "nguid": "B211E6AA82C94706B4E9A81E786D47FF", 00:24:35.723 "uuid": "b211e6aa-82c9-4706-b4e9-a81e786d47ff" 00:24:35.723 }, 00:24:35.723 { 00:24:35.723 "nsid": 2, 00:24:35.723 "bdev_name": "Malloc1", 00:24:35.723 "name": "Malloc1", 00:24:35.723 "nguid": "0E56EA711E594765872AEA798095912E", 00:24:35.723 "uuid": "0e56ea71-1e59-4765-872a-ea798095912e" 00:24:35.723 } 00:24:35.723 ] 00:24:35.723 } 00:24:35.723 ] 00:24:35.723 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.723 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 444303 00:24:35.723 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:35.723 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.723 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.723 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.723 20:44:29 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:35.724 rmmod nvme_tcp 00:24:35.724 rmmod nvme_fabrics 00:24:35.724 rmmod nvme_keyring 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
444099 ']' 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 444099 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 444099 ']' 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 444099 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.724 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 444099 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 444099' 00:24:35.983 killing process with pid 444099 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 444099 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 444099 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.983 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.984 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:35.984 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.984 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.984 20:44:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:38.520 00:24:38.520 real 0m9.892s 00:24:38.520 user 0m7.751s 00:24:38.520 sys 0m4.868s 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.520 ************************************ 00:24:38.520 END TEST nvmf_aer 00:24:38.520 ************************************ 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.520 ************************************ 00:24:38.520 START TEST nvmf_async_init 00:24:38.520 ************************************ 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:38.520 * Looking for test storage... 
00:24:38.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:38.520 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.520 20:44:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:38.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.521 --rc genhtml_branch_coverage=1 00:24:38.521 --rc genhtml_function_coverage=1 00:24:38.521 --rc genhtml_legend=1 00:24:38.521 --rc geninfo_all_blocks=1 00:24:38.521 --rc geninfo_unexecuted_blocks=1 00:24:38.521 
00:24:38.521 ' 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:38.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.521 --rc genhtml_branch_coverage=1 00:24:38.521 --rc genhtml_function_coverage=1 00:24:38.521 --rc genhtml_legend=1 00:24:38.521 --rc geninfo_all_blocks=1 00:24:38.521 --rc geninfo_unexecuted_blocks=1 00:24:38.521 00:24:38.521 ' 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:38.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.521 --rc genhtml_branch_coverage=1 00:24:38.521 --rc genhtml_function_coverage=1 00:24:38.521 --rc genhtml_legend=1 00:24:38.521 --rc geninfo_all_blocks=1 00:24:38.521 --rc geninfo_unexecuted_blocks=1 00:24:38.521 00:24:38.521 ' 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:38.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.521 --rc genhtml_branch_coverage=1 00:24:38.521 --rc genhtml_function_coverage=1 00:24:38.521 --rc genhtml_legend=1 00:24:38.521 --rc geninfo_all_blocks=1 00:24:38.521 --rc geninfo_unexecuted_blocks=1 00:24:38.521 00:24:38.521 ' 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0a4df96f4fe5400c8c07c6cb237f993c 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:38.521 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.522 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.522 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.522 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:38.522 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:38.522 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:38.522 20:44:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:45.084 20:44:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:45.084 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:45.084 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:45.084 Found net devices under 0000:af:00.0: cvl_0_0 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:45.084 Found net devices under 0000:af:00.1: cvl_0_1 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:45.084 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:45.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:45.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:24:45.085 00:24:45.085 --- 10.0.0.2 ping statistics --- 00:24:45.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.085 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:24:45.085 00:24:45.085 --- 10.0.0.1 ping statistics --- 00:24:45.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.085 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=447976 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 447976 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 447976 ']' 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.085 20:44:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.085 [2024-12-05 20:44:37.719510] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:24:45.085 [2024-12-05 20:44:37.719552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.085 [2024-12-05 20:44:37.793175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.085 [2024-12-05 20:44:37.832069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.085 [2024-12-05 20:44:37.832104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.085 [2024-12-05 20:44:37.832111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.085 [2024-12-05 20:44:37.832117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.085 [2024-12-05 20:44:37.832121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:45.085 [2024-12-05 20:44:37.832675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.344 [2024-12-05 20:44:38.569716] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.344 null0 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0a4df96f4fe5400c8c07c6cb237f993c 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.344 [2024-12-05 20:44:38.621973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.344 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.604 nvme0n1 00:24:45.604 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.604 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:45.604 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.604 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.604 [ 00:24:45.604 { 00:24:45.604 "name": "nvme0n1", 00:24:45.604 "aliases": [ 00:24:45.604 "0a4df96f-4fe5-400c-8c07-c6cb237f993c" 00:24:45.604 ], 00:24:45.604 "product_name": "NVMe disk", 00:24:45.604 "block_size": 512, 00:24:45.604 "num_blocks": 2097152, 00:24:45.604 "uuid": "0a4df96f-4fe5-400c-8c07-c6cb237f993c", 00:24:45.604 "numa_id": 1, 00:24:45.604 "assigned_rate_limits": { 00:24:45.604 "rw_ios_per_sec": 0, 00:24:45.604 "rw_mbytes_per_sec": 0, 00:24:45.604 "r_mbytes_per_sec": 0, 00:24:45.604 "w_mbytes_per_sec": 0 00:24:45.604 }, 00:24:45.604 "claimed": false, 00:24:45.604 "zoned": false, 00:24:45.604 "supported_io_types": { 00:24:45.604 "read": true, 00:24:45.604 "write": true, 00:24:45.604 "unmap": false, 00:24:45.604 "flush": true, 00:24:45.604 "reset": true, 00:24:45.604 "nvme_admin": true, 00:24:45.604 "nvme_io": true, 00:24:45.604 "nvme_io_md": false, 00:24:45.604 "write_zeroes": true, 00:24:45.604 "zcopy": false, 00:24:45.604 "get_zone_info": false, 00:24:45.604 "zone_management": false, 00:24:45.604 "zone_append": false, 00:24:45.604 "compare": true, 00:24:45.604 "compare_and_write": true, 00:24:45.604 "abort": true, 00:24:45.604 "seek_hole": false, 00:24:45.604 "seek_data": false, 00:24:45.604 "copy": true, 00:24:45.604 
"nvme_iov_md": false 00:24:45.604 }, 00:24:45.604 "memory_domains": [ 00:24:45.604 { 00:24:45.604 "dma_device_id": "system", 00:24:45.604 "dma_device_type": 1 00:24:45.604 } 00:24:45.604 ], 00:24:45.604 "driver_specific": { 00:24:45.604 "nvme": [ 00:24:45.604 { 00:24:45.604 "trid": { 00:24:45.604 "trtype": "TCP", 00:24:45.604 "adrfam": "IPv4", 00:24:45.604 "traddr": "10.0.0.2", 00:24:45.604 "trsvcid": "4420", 00:24:45.604 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:45.604 }, 00:24:45.604 "ctrlr_data": { 00:24:45.604 "cntlid": 1, 00:24:45.604 "vendor_id": "0x8086", 00:24:45.604 "model_number": "SPDK bdev Controller", 00:24:45.604 "serial_number": "00000000000000000000", 00:24:45.604 "firmware_revision": "25.01", 00:24:45.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:45.604 "oacs": { 00:24:45.604 "security": 0, 00:24:45.604 "format": 0, 00:24:45.604 "firmware": 0, 00:24:45.604 "ns_manage": 0 00:24:45.604 }, 00:24:45.604 "multi_ctrlr": true, 00:24:45.604 "ana_reporting": false 00:24:45.604 }, 00:24:45.604 "vs": { 00:24:45.604 "nvme_version": "1.3" 00:24:45.604 }, 00:24:45.604 "ns_data": { 00:24:45.604 "id": 1, 00:24:45.604 "can_share": true 00:24:45.604 } 00:24:45.604 } 00:24:45.604 ], 00:24:45.604 "mp_policy": "active_passive" 00:24:45.604 } 00:24:45.604 } 00:24:45.604 ] 00:24:45.604 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.604 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:45.604 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.604 20:44:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.604 [2024-12-05 20:44:38.886496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:45.604 [2024-12-05 20:44:38.886547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x96ba80 (9): Bad file descriptor 00:24:45.604 [2024-12-05 20:44:39.018139] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:45.604 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.604 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:45.604 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.604 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.604 [ 00:24:45.604 { 00:24:45.604 "name": "nvme0n1", 00:24:45.604 "aliases": [ 00:24:45.604 "0a4df96f-4fe5-400c-8c07-c6cb237f993c" 00:24:45.604 ], 00:24:45.604 "product_name": "NVMe disk", 00:24:45.604 "block_size": 512, 00:24:45.604 "num_blocks": 2097152, 00:24:45.604 "uuid": "0a4df96f-4fe5-400c-8c07-c6cb237f993c", 00:24:45.604 "numa_id": 1, 00:24:45.604 "assigned_rate_limits": { 00:24:45.604 "rw_ios_per_sec": 0, 00:24:45.604 "rw_mbytes_per_sec": 0, 00:24:45.604 "r_mbytes_per_sec": 0, 00:24:45.604 "w_mbytes_per_sec": 0 00:24:45.604 }, 00:24:45.604 "claimed": false, 00:24:45.604 "zoned": false, 00:24:45.604 "supported_io_types": { 00:24:45.604 "read": true, 00:24:45.604 "write": true, 00:24:45.604 "unmap": false, 00:24:45.604 "flush": true, 00:24:45.604 "reset": true, 00:24:45.604 "nvme_admin": true, 00:24:45.604 "nvme_io": true, 00:24:45.604 "nvme_io_md": false, 00:24:45.604 "write_zeroes": true, 00:24:45.604 "zcopy": false, 00:24:45.604 "get_zone_info": false, 00:24:45.604 "zone_management": false, 00:24:45.604 "zone_append": false, 00:24:45.604 "compare": true, 00:24:45.604 "compare_and_write": true, 00:24:45.604 "abort": true, 00:24:45.604 "seek_hole": false, 00:24:45.604 "seek_data": false, 00:24:45.604 "copy": true, 00:24:45.604 "nvme_iov_md": false 00:24:45.604 }, 00:24:45.604 "memory_domains": [ 
00:24:45.604 { 00:24:45.604 "dma_device_id": "system", 00:24:45.604 "dma_device_type": 1 00:24:45.605 } 00:24:45.605 ], 00:24:45.605 "driver_specific": { 00:24:45.605 "nvme": [ 00:24:45.605 { 00:24:45.605 "trid": { 00:24:45.605 "trtype": "TCP", 00:24:45.605 "adrfam": "IPv4", 00:24:45.605 "traddr": "10.0.0.2", 00:24:45.605 "trsvcid": "4420", 00:24:45.605 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:45.605 }, 00:24:45.605 "ctrlr_data": { 00:24:45.605 "cntlid": 2, 00:24:45.605 "vendor_id": "0x8086", 00:24:45.605 "model_number": "SPDK bdev Controller", 00:24:45.605 "serial_number": "00000000000000000000", 00:24:45.605 "firmware_revision": "25.01", 00:24:45.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:45.605 "oacs": { 00:24:45.605 "security": 0, 00:24:45.605 "format": 0, 00:24:45.605 "firmware": 0, 00:24:45.605 "ns_manage": 0 00:24:45.605 }, 00:24:45.605 "multi_ctrlr": true, 00:24:45.605 "ana_reporting": false 00:24:45.605 }, 00:24:45.605 "vs": { 00:24:45.605 "nvme_version": "1.3" 00:24:45.605 }, 00:24:45.605 "ns_data": { 00:24:45.605 "id": 1, 00:24:45.605 "can_share": true 00:24:45.605 } 00:24:45.605 } 00:24:45.605 ], 00:24:45.605 "mp_policy": "active_passive" 00:24:45.605 } 00:24:45.605 } 00:24:45.605 ] 00:24:45.605 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.605 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.605 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.605 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.TKXhAPNglA 
00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.TKXhAPNglA 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.TKXhAPNglA 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.864 [2024-12-05 20:44:39.091107] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:45.864 [2024-12-05 20:44:39.091202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.864 [2024-12-05 20:44:39.111166] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.864 nvme0n1 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.864 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.865 [ 00:24:45.865 { 00:24:45.865 "name": "nvme0n1", 00:24:45.865 "aliases": [ 00:24:45.865 "0a4df96f-4fe5-400c-8c07-c6cb237f993c" 00:24:45.865 ], 00:24:45.865 "product_name": "NVMe disk", 00:24:45.865 "block_size": 512, 00:24:45.865 "num_blocks": 2097152, 00:24:45.865 "uuid": "0a4df96f-4fe5-400c-8c07-c6cb237f993c", 00:24:45.865 "numa_id": 1, 00:24:45.865 "assigned_rate_limits": { 00:24:45.865 "rw_ios_per_sec": 0, 00:24:45.865 
"rw_mbytes_per_sec": 0, 00:24:45.865 "r_mbytes_per_sec": 0, 00:24:45.865 "w_mbytes_per_sec": 0 00:24:45.865 }, 00:24:45.865 "claimed": false, 00:24:45.865 "zoned": false, 00:24:45.865 "supported_io_types": { 00:24:45.865 "read": true, 00:24:45.865 "write": true, 00:24:45.865 "unmap": false, 00:24:45.865 "flush": true, 00:24:45.865 "reset": true, 00:24:45.865 "nvme_admin": true, 00:24:45.865 "nvme_io": true, 00:24:45.865 "nvme_io_md": false, 00:24:45.865 "write_zeroes": true, 00:24:45.865 "zcopy": false, 00:24:45.865 "get_zone_info": false, 00:24:45.865 "zone_management": false, 00:24:45.865 "zone_append": false, 00:24:45.865 "compare": true, 00:24:45.865 "compare_and_write": true, 00:24:45.865 "abort": true, 00:24:45.865 "seek_hole": false, 00:24:45.865 "seek_data": false, 00:24:45.865 "copy": true, 00:24:45.865 "nvme_iov_md": false 00:24:45.865 }, 00:24:45.865 "memory_domains": [ 00:24:45.865 { 00:24:45.865 "dma_device_id": "system", 00:24:45.865 "dma_device_type": 1 00:24:45.865 } 00:24:45.865 ], 00:24:45.865 "driver_specific": { 00:24:45.865 "nvme": [ 00:24:45.865 { 00:24:45.865 "trid": { 00:24:45.865 "trtype": "TCP", 00:24:45.865 "adrfam": "IPv4", 00:24:45.865 "traddr": "10.0.0.2", 00:24:45.865 "trsvcid": "4421", 00:24:45.865 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:45.865 }, 00:24:45.865 "ctrlr_data": { 00:24:45.865 "cntlid": 3, 00:24:45.865 "vendor_id": "0x8086", 00:24:45.865 "model_number": "SPDK bdev Controller", 00:24:45.865 "serial_number": "00000000000000000000", 00:24:45.865 "firmware_revision": "25.01", 00:24:45.865 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:45.865 "oacs": { 00:24:45.865 "security": 0, 00:24:45.865 "format": 0, 00:24:45.865 "firmware": 0, 00:24:45.865 "ns_manage": 0 00:24:45.865 }, 00:24:45.865 "multi_ctrlr": true, 00:24:45.865 "ana_reporting": false 00:24:45.865 }, 00:24:45.865 "vs": { 00:24:45.865 "nvme_version": "1.3" 00:24:45.865 }, 00:24:45.865 "ns_data": { 00:24:45.865 "id": 1, 00:24:45.865 "can_share": true 00:24:45.865 } 
00:24:45.865 } 00:24:45.865 ], 00:24:45.865 "mp_policy": "active_passive" 00:24:45.865 } 00:24:45.865 } 00:24:45.865 ] 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.TKXhAPNglA 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.865 rmmod nvme_tcp 00:24:45.865 rmmod nvme_fabrics 00:24:45.865 rmmod nvme_keyring 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:45.865 20:44:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 447976 ']' 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 447976 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 447976 ']' 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 447976 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.865 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 447976 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 447976' 00:24:46.124 killing process with pid 447976 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 447976 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 447976 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.124 20:44:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.124 20:44:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.672 00:24:48.672 real 0m10.077s 00:24:48.672 user 0m3.868s 00:24:48.672 sys 0m4.793s 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.672 ************************************ 00:24:48.672 END TEST nvmf_async_init 00:24:48.672 ************************************ 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.672 ************************************ 00:24:48.672 START TEST dma 00:24:48.672 ************************************ 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:48.672 * 
Looking for test storage... 00:24:48.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:48.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.672 --rc genhtml_branch_coverage=1 00:24:48.672 --rc genhtml_function_coverage=1 00:24:48.672 --rc genhtml_legend=1 00:24:48.672 --rc geninfo_all_blocks=1 00:24:48.672 --rc geninfo_unexecuted_blocks=1 00:24:48.672 00:24:48.672 ' 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:48.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.672 --rc genhtml_branch_coverage=1 00:24:48.672 --rc genhtml_function_coverage=1 
00:24:48.672 --rc genhtml_legend=1 00:24:48.672 --rc geninfo_all_blocks=1 00:24:48.672 --rc geninfo_unexecuted_blocks=1 00:24:48.672 00:24:48.672 ' 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:48.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.672 --rc genhtml_branch_coverage=1 00:24:48.672 --rc genhtml_function_coverage=1 00:24:48.672 --rc genhtml_legend=1 00:24:48.672 --rc geninfo_all_blocks=1 00:24:48.672 --rc geninfo_unexecuted_blocks=1 00:24:48.672 00:24:48.672 ' 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:48.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.672 --rc genhtml_branch_coverage=1 00:24:48.672 --rc genhtml_function_coverage=1 00:24:48.672 --rc genhtml_legend=1 00:24:48.672 --rc geninfo_all_blocks=1 00:24:48.672 --rc geninfo_unexecuted_blocks=1 00:24:48.672 00:24:48.672 ' 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.672 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:48.673 
20:44:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:48.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:48.673 00:24:48.673 real 0m0.202s 00:24:48.673 user 0m0.122s 00:24:48.673 sys 0m0.094s 00:24:48.673 20:44:41 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:48.673 ************************************ 00:24:48.673 END TEST dma 00:24:48.673 ************************************ 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.673 ************************************ 00:24:48.673 START TEST nvmf_identify 00:24:48.673 ************************************ 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:48.673 * Looking for test storage... 
00:24:48.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:24:48.673 20:44:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:48.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.673 --rc genhtml_branch_coverage=1 00:24:48.673 --rc genhtml_function_coverage=1 00:24:48.673 --rc genhtml_legend=1 00:24:48.673 --rc geninfo_all_blocks=1 00:24:48.673 --rc geninfo_unexecuted_blocks=1 00:24:48.673 00:24:48.673 ' 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:24:48.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.673 --rc genhtml_branch_coverage=1 00:24:48.673 --rc genhtml_function_coverage=1 00:24:48.673 --rc genhtml_legend=1 00:24:48.673 --rc geninfo_all_blocks=1 00:24:48.673 --rc geninfo_unexecuted_blocks=1 00:24:48.673 00:24:48.673 ' 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:48.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.673 --rc genhtml_branch_coverage=1 00:24:48.673 --rc genhtml_function_coverage=1 00:24:48.673 --rc genhtml_legend=1 00:24:48.673 --rc geninfo_all_blocks=1 00:24:48.673 --rc geninfo_unexecuted_blocks=1 00:24:48.673 00:24:48.673 ' 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:48.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.673 --rc genhtml_branch_coverage=1 00:24:48.673 --rc genhtml_function_coverage=1 00:24:48.673 --rc genhtml_legend=1 00:24:48.673 --rc geninfo_all_blocks=1 00:24:48.673 --rc geninfo_unexecuted_blocks=1 00:24:48.673 00:24:48.673 ' 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.673 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:48.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.674 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.933 20:44:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:55.503 20:44:47 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:55.503 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.503 
20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:55.503 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:55.503 Found net devices under 0000:af:00.0: cvl_0_0 00:24:55.503 20:44:47 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:55.503 Found net devices under 0000:af:00.1: cvl_0_1 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:55.503 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:55.504 20:44:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:55.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:24:55.504 00:24:55.504 --- 10.0.0.2 ping statistics --- 00:24:55.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.504 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:24:55.504 00:24:55.504 --- 10.0.0.1 ping statistics --- 00:24:55.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.504 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=451961 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 451961 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 451961 ']' 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.504 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.504 [2024-12-05 20:44:48.171976] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:24:55.504 [2024-12-05 20:44:48.172014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.504 [2024-12-05 20:44:48.251110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.504 [2024-12-05 20:44:48.293359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.504 [2024-12-05 20:44:48.293395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.504 [2024-12-05 20:44:48.293402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.504 [2024-12-05 20:44:48.293407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.504 [2024-12-05 20:44:48.293412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:55.504 [2024-12-05 20:44:48.294840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.504 [2024-12-05 20:44:48.294959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.504 [2024-12-05 20:44:48.295065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.504 [2024-12-05 20:44:48.295078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.764 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.764 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:55.764 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:55.764 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.764 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.764 [2024-12-05 20:44:48.989615] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.764 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.764 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:55.764 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:55.764 20:44:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.764 Malloc0 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.764 20:44:49 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.764 [2024-12-05 20:44:49.080584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.764 20:44:49 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.764 [ 00:24:55.764 { 00:24:55.764 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:55.764 "subtype": "Discovery", 00:24:55.764 "listen_addresses": [ 00:24:55.764 { 00:24:55.764 "trtype": "TCP", 00:24:55.764 "adrfam": "IPv4", 00:24:55.764 "traddr": "10.0.0.2", 00:24:55.764 "trsvcid": "4420" 00:24:55.764 } 00:24:55.764 ], 00:24:55.764 "allow_any_host": true, 00:24:55.764 "hosts": [] 00:24:55.764 }, 00:24:55.764 { 00:24:55.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.764 "subtype": "NVMe", 00:24:55.764 "listen_addresses": [ 00:24:55.764 { 00:24:55.764 "trtype": "TCP", 00:24:55.764 "adrfam": "IPv4", 00:24:55.764 "traddr": "10.0.0.2", 00:24:55.764 "trsvcid": "4420" 00:24:55.764 } 00:24:55.764 ], 00:24:55.764 "allow_any_host": true, 00:24:55.764 "hosts": [], 00:24:55.764 "serial_number": "SPDK00000000000001", 00:24:55.764 "model_number": "SPDK bdev Controller", 00:24:55.764 "max_namespaces": 32, 00:24:55.764 "min_cntlid": 1, 00:24:55.764 "max_cntlid": 65519, 00:24:55.764 "namespaces": [ 00:24:55.764 { 00:24:55.764 "nsid": 1, 00:24:55.764 "bdev_name": "Malloc0", 00:24:55.764 "name": "Malloc0", 00:24:55.764 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:55.764 "eui64": "ABCDEF0123456789", 00:24:55.764 "uuid": "20c054bf-b6d9-4385-8c71-15fc5459555e" 00:24:55.764 } 00:24:55.764 ] 00:24:55.764 } 00:24:55.764 ] 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.764 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:55.764 [2024-12-05 20:44:49.133853] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:24:55.764 [2024-12-05 20:44:49.133900] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452206 ] 00:24:55.764 [2024-12-05 20:44:49.171360] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:55.764 [2024-12-05 20:44:49.171404] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:55.764 [2024-12-05 20:44:49.171408] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:55.764 [2024-12-05 20:44:49.171421] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:55.764 [2024-12-05 20:44:49.171429] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:55.764 [2024-12-05 20:44:49.175347] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:55.764 [2024-12-05 20:44:49.175382] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2180550 0 00:24:55.764 [2024-12-05 20:44:49.183071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:55.764 [2024-12-05 20:44:49.183086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:55.764 [2024-12-05 20:44:49.183090] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:55.764 [2024-12-05 20:44:49.183092] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:55.764 [2024-12-05 20:44:49.183126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.764 [2024-12-05 20:44:49.183131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.764 [2024-12-05 20:44:49.183134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2180550) 00:24:55.764 [2024-12-05 20:44:49.183146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:55.764 [2024-12-05 20:44:49.183164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2100, cid 0, qid 0 00:24:55.764 [2024-12-05 20:44:49.191066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.764 [2024-12-05 20:44:49.191073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.764 [2024-12-05 20:44:49.191076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.764 [2024-12-05 20:44:49.191080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2100) on tqpair=0x2180550 00:24:55.764 [2024-12-05 20:44:49.191091] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:55.764 [2024-12-05 20:44:49.191098] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:55.764 [2024-12-05 20:44:49.191102] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:55.764 [2024-12-05 20:44:49.191113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.764 [2024-12-05 20:44:49.191119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.764 [2024-12-05 20:44:49.191122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2180550) 
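For reference, the target-side setup that the `rpc_cmd` calls at the top of this section perform can be sketched as a plain `rpc.py` script. This is a hedged reconstruction, not text from the test: the `RPC` path and the prior transport-creation step are assumptions, while the subsystem, namespace, and listener arguments are exactly the ones visible in the log above.

```shell
# Sketch of the target configuration exercised by host/identify.sh.
# The rpc.py path and the nvmf_create_transport step are assumptions;
# the remaining arguments are taken from the rpc_cmd calls in the log.
RPC=./scripts/rpc.py   # hypothetical path to SPDK's rpc.py

# The harness creates the TCP transport before this point (assumption).
$RPC nvmf_create_transport -t tcp

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

`nvmf_get_subsystems` then returns the discovery subsystem plus `cnode1` with `Malloc0` as nsid 1, matching the JSON dump earlier in the trace.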
00:24:55.764 [2024-12-05 20:44:49.191128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.764 [2024-12-05 20:44:49.191139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2100, cid 0, qid 0 00:24:55.764 [2024-12-05 20:44:49.191312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.764 [2024-12-05 20:44:49.191317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.764 [2024-12-05 20:44:49.191320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.764 [2024-12-05 20:44:49.191323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2100) on tqpair=0x2180550 00:24:55.764 [2024-12-05 20:44:49.191327] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:55.764 [2024-12-05 20:44:49.191333] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:55.765 [2024-12-05 20:44:49.191338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2180550) 00:24:55.765 [2024-12-05 20:44:49.191349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.765 [2024-12-05 20:44:49.191359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2100, cid 0, qid 0 00:24:55.765 [2024-12-05 20:44:49.191418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.765 [2024-12-05 20:44:49.191423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
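The `_nvme_ctrlr_set_state` debug lines in this trace step through the admin-queue bring-up sequence one state at a time (FABRIC CONNECT, then PROPERTY GET of VS, CAP and CC, then enable and identify). A hypothetical summary of that happy-path order, with state names taken from the log; the list and helper are illustrative only, not SPDK source:

```python
# Happy-path order of the controller init states seen in this trace.
# Names mirror the _nvme_ctrlr_set_state debug messages; the structure
# is an illustration, not SPDK's actual state table.
INIT_STATES = [
    "connect adminq",                          # FABRIC CONNECT qid:0
    "read vs",                                 # PROPERTY GET: Version register
    "read cap",                                # PROPERTY GET: Controller Capabilities
    "check en",                                # PROPERTY GET: CC, is EN already set?
    "disable and wait for CSTS.RDY = 0",
    "controller is disabled",                  # CC.EN = 0 && CSTS.RDY = 0
    "enable controller by writing CC.EN = 1",  # FABRIC PROPERTY SET
    "wait for CSTS.RDY = 1",
    "reset admin queue",
    "identify controller",                     # IDENTIFY (06) cdw10:00000001
    "configure AER",                           # SET FEATURES ASYNC EVENT CONFIGURATION
    "set keep alive timeout",
    "ready",
]

def next_state(current: str) -> str:
    """Return the state that follows `current` on the happy path above."""
    return INIT_STATES[INIT_STATES.index(current) + 1]
```

Each transition in the log is driven by the completion of the previous fabric command on the admin qpair (`tqpair 0x2180550`, `cid 0`).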
00:24:55.765 [2024-12-05 20:44:49.191426] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2100) on tqpair=0x2180550 00:24:55.765 [2024-12-05 20:44:49.191434] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:55.765 [2024-12-05 20:44:49.191440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:55.765 [2024-12-05 20:44:49.191446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2180550) 00:24:55.765 [2024-12-05 20:44:49.191457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.765 [2024-12-05 20:44:49.191465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2100, cid 0, qid 0 00:24:55.765 [2024-12-05 20:44:49.191527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.765 [2024-12-05 20:44:49.191532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.765 [2024-12-05 20:44:49.191535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2100) on tqpair=0x2180550 00:24:55.765 [2024-12-05 20:44:49.191542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:55.765 [2024-12-05 20:44:49.191549] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2180550) 00:24:55.765 [2024-12-05 20:44:49.191560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.765 [2024-12-05 20:44:49.191572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2100, cid 0, qid 0 00:24:55.765 [2024-12-05 20:44:49.191629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.765 [2024-12-05 20:44:49.191634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.765 [2024-12-05 20:44:49.191637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2100) on tqpair=0x2180550 00:24:55.765 [2024-12-05 20:44:49.191644] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:55.765 [2024-12-05 20:44:49.191648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:55.765 [2024-12-05 20:44:49.191655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:55.765 [2024-12-05 20:44:49.191762] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:55.765 [2024-12-05 20:44:49.191766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:55.765 [2024-12-05 20:44:49.191774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2180550) 00:24:55.765 [2024-12-05 20:44:49.191784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.765 [2024-12-05 20:44:49.191793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2100, cid 0, qid 0 00:24:55.765 [2024-12-05 20:44:49.191853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.765 [2024-12-05 20:44:49.191858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.765 [2024-12-05 20:44:49.191861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2100) on tqpair=0x2180550 00:24:55.765 [2024-12-05 20:44:49.191868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:55.765 [2024-12-05 20:44:49.191875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2180550) 00:24:55.765 [2024-12-05 20:44:49.191886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.765 [2024-12-05 20:44:49.191894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2100, cid 0, qid 0 00:24:55.765 [2024-12-05 
20:44:49.191950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.765 [2024-12-05 20:44:49.191955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.765 [2024-12-05 20:44:49.191958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2100) on tqpair=0x2180550 00:24:55.765 [2024-12-05 20:44:49.191965] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:55.765 [2024-12-05 20:44:49.191969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:55.765 [2024-12-05 20:44:49.191977] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:55.765 [2024-12-05 20:44:49.191988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:55.765 [2024-12-05 20:44:49.191995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.191998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2180550) 00:24:55.765 [2024-12-05 20:44:49.192003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.765 [2024-12-05 20:44:49.192012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2100, cid 0, qid 0 00:24:55.765 [2024-12-05 20:44:49.192097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:55.765 [2024-12-05 20:44:49.192103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:24:55.765 [2024-12-05 20:44:49.192106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.192109] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2180550): datao=0, datal=4096, cccid=0 00:24:55.765 [2024-12-05 20:44:49.192113] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21e2100) on tqpair(0x2180550): expected_datao=0, payload_size=4096 00:24:55.765 [2024-12-05 20:44:49.192117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.192129] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:55.765 [2024-12-05 20:44:49.192133] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.026 [2024-12-05 20:44:49.233211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.026 [2024-12-05 20:44:49.233214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2100) on tqpair=0x2180550 00:24:56.026 [2024-12-05 20:44:49.233225] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:56.026 [2024-12-05 20:44:49.233229] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:56.026 [2024-12-05 20:44:49.233236] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:56.026 [2024-12-05 20:44:49.233241] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:56.026 [2024-12-05 20:44:49.233246] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:56.026 [2024-12-05 20:44:49.233250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:56.026 [2024-12-05 20:44:49.233258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:56.026 [2024-12-05 20:44:49.233264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2180550) 00:24:56.026 [2024-12-05 20:44:49.233278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:56.026 [2024-12-05 20:44:49.233289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2100, cid 0, qid 0 00:24:56.026 [2024-12-05 20:44:49.233357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.026 [2024-12-05 20:44:49.233362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.026 [2024-12-05 20:44:49.233365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2100) on tqpair=0x2180550 00:24:56.026 [2024-12-05 20:44:49.233376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2180550) 00:24:56.026 [2024-12-05 20:44:49.233387] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.026 [2024-12-05 20:44:49.233391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2180550) 00:24:56.026 [2024-12-05 20:44:49.233401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.026 [2024-12-05 20:44:49.233406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2180550) 00:24:56.026 [2024-12-05 20:44:49.233416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.026 [2024-12-05 20:44:49.233421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.026 [2024-12-05 20:44:49.233431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.026 [2024-12-05 20:44:49.233435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:56.026 [2024-12-05 20:44:49.233444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:56.026 [2024-12-05 20:44:49.233449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2180550) 00:24:56.026 [2024-12-05 20:44:49.233458] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.026 [2024-12-05 20:44:49.233468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2100, cid 0, qid 0 00:24:56.026 [2024-12-05 20:44:49.233472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2280, cid 1, qid 0 00:24:56.026 [2024-12-05 20:44:49.233475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2400, cid 2, qid 0 00:24:56.026 [2024-12-05 20:44:49.233479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.026 [2024-12-05 20:44:49.233483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2700, cid 4, qid 0 00:24:56.026 [2024-12-05 20:44:49.233575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.026 [2024-12-05 20:44:49.233580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.026 [2024-12-05 20:44:49.233582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2700) on tqpair=0x2180550 00:24:56.026 [2024-12-05 20:44:49.233590] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:56.026 [2024-12-05 20:44:49.233594] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:24:56.026 [2024-12-05 20:44:49.233604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2180550) 00:24:56.026 [2024-12-05 20:44:49.233613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.026 [2024-12-05 20:44:49.233621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2700, cid 4, qid 0 00:24:56.026 [2024-12-05 20:44:49.233683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.026 [2024-12-05 20:44:49.233688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.026 [2024-12-05 20:44:49.233691] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.026 [2024-12-05 20:44:49.233694] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2180550): datao=0, datal=4096, cccid=4 00:24:56.027 [2024-12-05 20:44:49.233698] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21e2700) on tqpair(0x2180550): expected_datao=0, payload_size=4096 00:24:56.027 [2024-12-05 20:44:49.233701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233719] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233723] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.027 [2024-12-05 20:44:49.233760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.027 [2024-12-05 20:44:49.233763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x21e2700) on tqpair=0x2180550 00:24:56.027 [2024-12-05 20:44:49.233776] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:56.027 [2024-12-05 20:44:49.233796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2180550) 00:24:56.027 [2024-12-05 20:44:49.233805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.027 [2024-12-05 20:44:49.233811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2180550) 00:24:56.027 [2024-12-05 20:44:49.233821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.027 [2024-12-05 20:44:49.233834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2700, cid 4, qid 0 00:24:56.027 [2024-12-05 20:44:49.233838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2880, cid 5, qid 0 00:24:56.027 [2024-12-05 20:44:49.233933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.027 [2024-12-05 20:44:49.233938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.027 [2024-12-05 20:44:49.233941] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233944] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2180550): datao=0, datal=1024, cccid=4 00:24:56.027 [2024-12-05 20:44:49.233947] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21e2700) on tqpair(0x2180550): expected_datao=0, payload_size=1024 00:24:56.027 [2024-12-05 20:44:49.233951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233956] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233959] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.027 [2024-12-05 20:44:49.233969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.027 [2024-12-05 20:44:49.233972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.233975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2880) on tqpair=0x2180550 00:24:56.027 [2024-12-05 20:44:49.278267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.027 [2024-12-05 20:44:49.278282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.027 [2024-12-05 20:44:49.278285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.278289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2700) on tqpair=0x2180550 00:24:56.027 [2024-12-05 20:44:49.278301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.278305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2180550) 00:24:56.027 [2024-12-05 20:44:49.278312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.027 [2024-12-05 20:44:49.278328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2700, cid 4, qid 0 00:24:56.027 [2024-12-05 20:44:49.278491] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.027 [2024-12-05 20:44:49.278498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.027 [2024-12-05 20:44:49.278500] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.278503] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2180550): datao=0, datal=3072, cccid=4 00:24:56.027 [2024-12-05 20:44:49.278507] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21e2700) on tqpair(0x2180550): expected_datao=0, payload_size=3072 00:24:56.027 [2024-12-05 20:44:49.278511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.278524] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.278528] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.320218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.027 [2024-12-05 20:44:49.320230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.027 [2024-12-05 20:44:49.320234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.320237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2700) on tqpair=0x2180550 00:24:56.027 [2024-12-05 20:44:49.320247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.320250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2180550) 00:24:56.027 [2024-12-05 20:44:49.320257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.027 [2024-12-05 20:44:49.320270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2700, cid 4, qid 0 00:24:56.027 [2024-12-05 
20:44:49.320378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.027 [2024-12-05 20:44:49.320383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.027 [2024-12-05 20:44:49.320386] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.320389] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2180550): datao=0, datal=8, cccid=4 00:24:56.027 [2024-12-05 20:44:49.320393] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21e2700) on tqpair(0x2180550): expected_datao=0, payload_size=8 00:24:56.027 [2024-12-05 20:44:49.320396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.320401] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.320404] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.362212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.027 [2024-12-05 20:44:49.362222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.027 [2024-12-05 20:44:49.362228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.027 [2024-12-05 20:44:49.362231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2700) on tqpair=0x2180550 00:24:56.027 ===================================================== 00:24:56.027 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:56.027 ===================================================== 00:24:56.027 Controller Capabilities/Features 00:24:56.027 ================================ 00:24:56.027 Vendor ID: 0000 00:24:56.027 Subsystem Vendor ID: 0000 00:24:56.027 Serial Number: .................... 00:24:56.027 Model Number: ........................................ 
00:24:56.027 Firmware Version: 25.01
00:24:56.027 Recommended Arb Burst: 0
00:24:56.027 IEEE OUI Identifier: 00 00 00
00:24:56.027 Multi-path I/O
00:24:56.027 May have multiple subsystem ports: No
00:24:56.027 May have multiple controllers: No
00:24:56.027 Associated with SR-IOV VF: No
00:24:56.027 Max Data Transfer Size: 131072
00:24:56.027 Max Number of Namespaces: 0
00:24:56.027 Max Number of I/O Queues: 1024
00:24:56.027 NVMe Specification Version (VS): 1.3
00:24:56.027 NVMe Specification Version (Identify): 1.3
00:24:56.027 Maximum Queue Entries: 128
00:24:56.027 Contiguous Queues Required: Yes
00:24:56.027 Arbitration Mechanisms Supported
00:24:56.027 Weighted Round Robin: Not Supported
00:24:56.027 Vendor Specific: Not Supported
00:24:56.027 Reset Timeout: 15000 ms
00:24:56.027 Doorbell Stride: 4 bytes
00:24:56.027 NVM Subsystem Reset: Not Supported
00:24:56.027 Command Sets Supported
00:24:56.027 NVM Command Set: Supported
00:24:56.027 Boot Partition: Not Supported
00:24:56.027 Memory Page Size Minimum: 4096 bytes
00:24:56.027 Memory Page Size Maximum: 4096 bytes
00:24:56.027 Persistent Memory Region: Not Supported
00:24:56.027 Optional Asynchronous Events Supported
00:24:56.027 Namespace Attribute Notices: Not Supported
00:24:56.027 Firmware Activation Notices: Not Supported
00:24:56.027 ANA Change Notices: Not Supported
00:24:56.027 PLE Aggregate Log Change Notices: Not Supported
00:24:56.027 LBA Status Info Alert Notices: Not Supported
00:24:56.027 EGE Aggregate Log Change Notices: Not Supported
00:24:56.027 Normal NVM Subsystem Shutdown event: Not Supported
00:24:56.027 Zone Descriptor Change Notices: Not Supported
00:24:56.027 Discovery Log Change Notices: Supported
00:24:56.027 Controller Attributes
00:24:56.027 128-bit Host Identifier: Not Supported
00:24:56.027 Non-Operational Permissive Mode: Not Supported
00:24:56.027 NVM Sets: Not Supported
00:24:56.027 Read Recovery Levels: Not Supported
00:24:56.027 Endurance Groups: Not Supported
00:24:56.027 Predictable Latency Mode: Not Supported
00:24:56.027 Traffic Based Keep ALive: Not Supported
00:24:56.027 Namespace Granularity: Not Supported
00:24:56.027 SQ Associations: Not Supported
00:24:56.027 UUID List: Not Supported
00:24:56.027 Multi-Domain Subsystem: Not Supported
00:24:56.027 Fixed Capacity Management: Not Supported
00:24:56.027 Variable Capacity Management: Not Supported
00:24:56.027 Delete Endurance Group: Not Supported
00:24:56.027 Delete NVM Set: Not Supported
00:24:56.027 Extended LBA Formats Supported: Not Supported
00:24:56.027 Flexible Data Placement Supported: Not Supported
00:24:56.027
00:24:56.027 Controller Memory Buffer Support
00:24:56.027 ================================
00:24:56.027 Supported: No
00:24:56.027
00:24:56.027 Persistent Memory Region Support
00:24:56.027 ================================
00:24:56.027 Supported: No
00:24:56.027
00:24:56.027 Admin Command Set Attributes
00:24:56.027 ============================
00:24:56.027 Security Send/Receive: Not Supported
00:24:56.027 Format NVM: Not Supported
00:24:56.027 Firmware Activate/Download: Not Supported
00:24:56.027 Namespace Management: Not Supported
00:24:56.027 Device Self-Test: Not Supported
00:24:56.027 Directives: Not Supported
00:24:56.027 NVMe-MI: Not Supported
00:24:56.027 Virtualization Management: Not Supported
00:24:56.027 Doorbell Buffer Config: Not Supported
00:24:56.027 Get LBA Status Capability: Not Supported
00:24:56.027 Command & Feature Lockdown Capability: Not Supported
00:24:56.027 Abort Command Limit: 1
00:24:56.027 Async Event Request Limit: 4
00:24:56.027 Number of Firmware Slots: N/A
00:24:56.027 Firmware Slot 1 Read-Only: N/A
00:24:56.027 Firmware Activation Without Reset: N/A
00:24:56.027 Multiple Update Detection Support: N/A
00:24:56.027 Firmware Update Granularity: No Information Provided
00:24:56.027 Per-Namespace SMART Log: No
00:24:56.027 Asymmetric Namespace Access Log Page: Not Supported
00:24:56.027 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:56.027 Command Effects Log Page: Not Supported
00:24:56.027 Get Log Page Extended Data: Supported
00:24:56.027 Telemetry Log Pages: Not Supported
00:24:56.027 Persistent Event Log Pages: Not Supported
00:24:56.027 Supported Log Pages Log Page: May Support
00:24:56.027 Commands Supported & Effects Log Page: Not Supported
00:24:56.027 Feature Identifiers & Effects Log Page: May Support
00:24:56.027 NVMe-MI Commands & Effects Log Page: May Support
00:24:56.027 Data Area 4 for Telemetry Log: Not Supported
00:24:56.027 Error Log Page Entries Supported: 128
00:24:56.027 Keep Alive: Not Supported
00:24:56.027
00:24:56.027 NVM Command Set Attributes
00:24:56.027 ==========================
00:24:56.027 Submission Queue Entry Size
00:24:56.027 Max: 1
00:24:56.027 Min: 1
00:24:56.027 Completion Queue Entry Size
00:24:56.027 Max: 1
00:24:56.027 Min: 1
00:24:56.027 Number of Namespaces: 0
00:24:56.027 Compare Command: Not Supported
00:24:56.027 Write Uncorrectable Command: Not Supported
00:24:56.027 Dataset Management Command: Not Supported
00:24:56.027 Write Zeroes Command: Not Supported
00:24:56.027 Set Features Save Field: Not Supported
00:24:56.027 Reservations: Not Supported
00:24:56.027 Timestamp: Not Supported
00:24:56.027 Copy: Not Supported
00:24:56.027 Volatile Write Cache: Not Present
00:24:56.027 Atomic Write Unit (Normal): 1
00:24:56.027 Atomic Write Unit (PFail): 1
00:24:56.027 Atomic Compare & Write Unit: 1
00:24:56.027 Fused Compare & Write: Supported
00:24:56.027 Scatter-Gather List
00:24:56.027 SGL Command Set: Supported
00:24:56.027 SGL Keyed: Supported
00:24:56.027 SGL Bit Bucket Descriptor: Not Supported
00:24:56.027 SGL Metadata Pointer: Not Supported
00:24:56.027 Oversized SGL: Not Supported
00:24:56.027 SGL Metadata Address: Not Supported
00:24:56.027 SGL Offset: Supported
00:24:56.027 Transport SGL Data Block: Not Supported
00:24:56.027 Replay Protected Memory Block: Not Supported
00:24:56.027
00:24:56.027 Firmware Slot Information
00:24:56.027 =========================
00:24:56.027 Active slot: 0
00:24:56.027
00:24:56.027
00:24:56.027 Error Log
00:24:56.027 =========
00:24:56.027
00:24:56.027 Active Namespaces
00:24:56.027 =================
00:24:56.027 Discovery Log Page
00:24:56.027 ==================
00:24:56.027 Generation Counter: 2
00:24:56.027 Number of Records: 2
00:24:56.027 Record Format: 0
00:24:56.027
00:24:56.027 Discovery Log Entry 0
00:24:56.027 ----------------------
00:24:56.027 Transport Type: 3 (TCP)
00:24:56.027 Address Family: 1 (IPv4)
00:24:56.027 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:56.027 Entry Flags:
00:24:56.027 Duplicate Returned Information: 1
00:24:56.027 Explicit Persistent Connection Support for Discovery: 1
00:24:56.027 Transport Requirements:
00:24:56.027 Secure Channel: Not Required
00:24:56.027 Port ID: 0 (0x0000)
00:24:56.027 Controller ID: 65535 (0xffff)
00:24:56.027 Admin Max SQ Size: 128
00:24:56.027 Transport Service Identifier: 4420
00:24:56.028 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:56.028 Transport Address: 10.0.0.2
00:24:56.028 Discovery Log Entry 1
00:24:56.028 ----------------------
00:24:56.028 Transport Type: 3 (TCP)
00:24:56.028 Address Family: 1 (IPv4)
00:24:56.028 Subsystem Type: 2 (NVM Subsystem)
00:24:56.028 Entry Flags:
00:24:56.028 Duplicate Returned Information: 0
00:24:56.028 Explicit Persistent Connection Support for Discovery: 0
00:24:56.028 Transport Requirements:
00:24:56.028 Secure Channel: Not Required
00:24:56.028 Port ID: 0 (0x0000)
00:24:56.028 Controller ID: 65535 (0xffff)
00:24:56.028 Admin Max SQ Size: 128
00:24:56.028 Transport Service Identifier: 4420
00:24:56.028 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:56.028 Transport Address: 10.0.0.2 [2024-12-05 20:44:49.362304] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:24:56.028 [2024-12-05
20:44:49.362314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2100) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.362320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.028 [2024-12-05 20:44:49.362324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2280) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.362328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.028 [2024-12-05 20:44:49.362332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2400) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.362336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.028 [2024-12-05 20:44:49.362339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.362343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.028 [2024-12-05 20:44:49.362353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.362365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.362378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.362435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 
20:44:49.362440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.362443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.362452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.362463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.362474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.362550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.362555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.362558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.362565] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:56.028 [2024-12-05 20:44:49.362569] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:56.028 [2024-12-05 20:44:49.362576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 
[2024-12-05 20:44:49.362582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.362589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.362598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.362664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.362669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.362671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.362683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.362694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.362703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.362762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.362767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.362770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on 
tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.362780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.362791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.362799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.362857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.362863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.362865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.362876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.362887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.362895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.362953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.362958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:24:56.028 [2024-12-05 20:44:49.362960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.362970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.362977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.362982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.362992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.363048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.363053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.363056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.363071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.363082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.363090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.363149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.363154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.363157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.363167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.363178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.363186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.363245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.363250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.363253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.363263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.363274] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.363282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.363335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.363340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.363342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.363353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.363363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.363374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.363429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.363434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.363437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.363447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363450] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.363458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.363467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.363529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.363534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.363537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.363547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.363558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.363566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.363624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.363629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.363632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363635] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.363642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.363653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.363661] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.363717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 20:44:49.363722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.363725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.028 [2024-12-05 20:44:49.363735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.028 [2024-12-05 20:44:49.363741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.028 [2024-12-05 20:44:49.363746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.028 [2024-12-05 20:44:49.363754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.028 [2024-12-05 20:44:49.363811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.028 [2024-12-05 
20:44:49.363816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.028 [2024-12-05 20:44:49.363819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.363822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.029 [2024-12-05 20:44:49.363829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.363832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.363835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.029 [2024-12-05 20:44:49.363840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.029 [2024-12-05 20:44:49.363848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.029 [2024-12-05 20:44:49.363908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.029 [2024-12-05 20:44:49.363913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.029 [2024-12-05 20:44:49.363916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.363919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.029 [2024-12-05 20:44:49.363926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.363929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.363932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.029 [2024-12-05 20:44:49.363937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.029 [2024-12-05 
20:44:49.363945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.029 [2024-12-05 20:44:49.364003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.029 [2024-12-05 20:44:49.364008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.029 [2024-12-05 20:44:49.364011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.364014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.029 [2024-12-05 20:44:49.364021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.364024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.364027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2180550) 00:24:56.029 [2024-12-05 20:44:49.364032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.029 [2024-12-05 20:44:49.364040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0 00:24:56.029 [2024-12-05 20:44:49.368063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.029 [2024-12-05 20:44:49.368071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.029 [2024-12-05 20:44:49.368074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.368076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550 00:24:56.029 [2024-12-05 20:44:49.368086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.368089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.368092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x2180550)
00:24:56.029 [2024-12-05 20:44:49.368097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.029 [2024-12-05 20:44:49.368106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21e2580, cid 3, qid 0
00:24:56.029 [2024-12-05 20:44:49.368250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.029 [2024-12-05 20:44:49.368258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.029 [2024-12-05 20:44:49.368261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.029 [2024-12-05 20:44:49.368264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21e2580) on tqpair=0x2180550
00:24:56.029 [2024-12-05 20:44:49.368269] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds
00:24:56.029
00:24:56.029 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:24:56.029 [2024-12-05 20:44:49.406487] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:24:56.029 [2024-12-05 20:44:49.406534] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452208 ]
00:24:56.029 [2024-12-05 20:44:49.444042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:24:56.029 [2024-12-05 20:44:49.444085] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:24:56.029 [2024-12-05 20:44:49.444090] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:24:56.029 [2024-12-05 20:44:49.444102] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:24:56.029 [2024-12-05 20:44:49.444108] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:24:56.029 [2024-12-05 20:44:49.448244] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:24:56.029 [2024-12-05 20:44:49.448274] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10b6550 0
00:24:56.029 [2024-12-05 20:44:49.456072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:24:56.029 [2024-12-05 20:44:49.456084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:24:56.029 [2024-12-05 20:44:49.456087] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:24:56.029 [2024-12-05 20:44:49.456090] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:24:56.029 [2024-12-05 20:44:49.456116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.029 [2024-12-05 20:44:49.456120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.029 [2024-12-05 20:44:49.456123] nvme_tcp.c:
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6550) 00:24:56.029 [2024-12-05 20:44:49.456133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:56.029 [2024-12-05 20:44:49.456148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118100, cid 0, qid 0 00:24:56.029 [2024-12-05 20:44:49.463066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.029 [2024-12-05 20:44:49.463074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.029 [2024-12-05 20:44:49.463077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118100) on tqpair=0x10b6550 00:24:56.029 [2024-12-05 20:44:49.463091] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:56.029 [2024-12-05 20:44:49.463096] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:56.029 [2024-12-05 20:44:49.463101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:56.029 [2024-12-05 20:44:49.463110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6550) 00:24:56.029 [2024-12-05 20:44:49.463124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.029 [2024-12-05 20:44:49.463136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118100, cid 0, qid 0 00:24:56.029 [2024-12-05 20:44:49.463291] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.029 [2024-12-05 20:44:49.463296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.029 [2024-12-05 20:44:49.463298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118100) on tqpair=0x10b6550 00:24:56.029 [2024-12-05 20:44:49.463305] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:56.029 [2024-12-05 20:44:49.463311] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:56.029 [2024-12-05 20:44:49.463316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6550) 00:24:56.029 [2024-12-05 20:44:49.463327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.029 [2024-12-05 20:44:49.463336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118100, cid 0, qid 0 00:24:56.029 [2024-12-05 20:44:49.463398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.029 [2024-12-05 20:44:49.463403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.029 [2024-12-05 20:44:49.463406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118100) on tqpair=0x10b6550 00:24:56.029 [2024-12-05 20:44:49.463413] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:24:56.029 [2024-12-05 20:44:49.463419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:56.029 [2024-12-05 20:44:49.463424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6550) 00:24:56.029 [2024-12-05 20:44:49.463435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.029 [2024-12-05 20:44:49.463444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118100, cid 0, qid 0 00:24:56.029 [2024-12-05 20:44:49.463505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.029 [2024-12-05 20:44:49.463510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.029 [2024-12-05 20:44:49.463513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118100) on tqpair=0x10b6550 00:24:56.029 [2024-12-05 20:44:49.463520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:56.029 [2024-12-05 20:44:49.463527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6550) 00:24:56.029 [2024-12-05 20:44:49.463538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.029 [2024-12-05 20:44:49.463548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118100, cid 0, qid 0 00:24:56.029 [2024-12-05 20:44:49.463602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.029 [2024-12-05 20:44:49.463607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.029 [2024-12-05 20:44:49.463610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118100) on tqpair=0x10b6550 00:24:56.029 [2024-12-05 20:44:49.463616] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:56.029 [2024-12-05 20:44:49.463620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:56.029 [2024-12-05 20:44:49.463626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:56.029 [2024-12-05 20:44:49.463733] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:56.029 [2024-12-05 20:44:49.463738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:56.029 [2024-12-05 20:44:49.463744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.029 [2024-12-05 20:44:49.463750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6550) 00:24:56.029 [2024-12-05 20:44:49.463755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.029 [2024-12-05 20:44:49.463764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118100, cid 0, qid 0 00:24:56.292 [2024-12-05 20:44:49.463820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.292 [2024-12-05 20:44:49.463827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.292 [2024-12-05 20:44:49.463830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.463833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118100) on tqpair=0x10b6550 00:24:56.292 [2024-12-05 20:44:49.463837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:56.292 [2024-12-05 20:44:49.463846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.463851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.463853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6550) 00:24:56.292 [2024-12-05 20:44:49.463858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.292 [2024-12-05 20:44:49.463868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118100, cid 0, qid 0 00:24:56.292 [2024-12-05 20:44:49.463921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.292 [2024-12-05 20:44:49.463928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.292 [2024-12-05 20:44:49.463930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.463933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118100) on tqpair=0x10b6550 00:24:56.292 [2024-12-05 20:44:49.463937] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:56.292 [2024-12-05 20:44:49.463941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:56.292 [2024-12-05 20:44:49.463947] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:56.292 [2024-12-05 20:44:49.463961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:56.292 [2024-12-05 20:44:49.463968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.463971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6550) 00:24:56.292 [2024-12-05 20:44:49.463976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.292 [2024-12-05 20:44:49.463985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118100, cid 0, qid 0 00:24:56.292 [2024-12-05 20:44:49.464083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.292 [2024-12-05 20:44:49.464089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.292 [2024-12-05 20:44:49.464092] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464095] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6550): datao=0, datal=4096, cccid=0 00:24:56.292 [2024-12-05 20:44:49.464098] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1118100) on tqpair(0x10b6550): expected_datao=0, payload_size=4096 00:24:56.292 [2024-12-05 20:44:49.464101] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464112] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464115] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.292 [2024-12-05 20:44:49.464142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.292 [2024-12-05 20:44:49.464145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118100) on tqpair=0x10b6550 00:24:56.292 [2024-12-05 20:44:49.464153] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:56.292 [2024-12-05 20:44:49.464157] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:56.292 [2024-12-05 20:44:49.464162] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:56.292 [2024-12-05 20:44:49.464165] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:56.292 [2024-12-05 20:44:49.464169] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:56.292 [2024-12-05 20:44:49.464173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:56.292 [2024-12-05 20:44:49.464180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:56.292 [2024-12-05 20:44:49.464185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464188] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6550) 00:24:56.292 [2024-12-05 20:44:49.464196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:56.292 [2024-12-05 20:44:49.464206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118100, cid 0, qid 0 00:24:56.292 [2024-12-05 20:44:49.464265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.292 [2024-12-05 20:44:49.464271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.292 [2024-12-05 20:44:49.464273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464276] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118100) on tqpair=0x10b6550 00:24:56.292 [2024-12-05 20:44:49.464282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6550) 00:24:56.292 [2024-12-05 20:44:49.464293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.292 [2024-12-05 20:44:49.464297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10b6550) 00:24:56.292 [2024-12-05 20:44:49.464307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:56.292 [2024-12-05 20:44:49.464312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10b6550) 00:24:56.292 [2024-12-05 20:44:49.464321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.292 [2024-12-05 20:44:49.464326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.292 [2024-12-05 20:44:49.464336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.292 [2024-12-05 20:44:49.464340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:56.292 [2024-12-05 20:44:49.464349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:56.292 [2024-12-05 20:44:49.464354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.292 [2024-12-05 20:44:49.464356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b6550) 00:24:56.293 [2024-12-05 20:44:49.464361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.293 [2024-12-05 20:44:49.464371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1118100, cid 0, qid 0 00:24:56.293 [2024-12-05 20:44:49.464375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118280, cid 1, qid 0 00:24:56.293 [2024-12-05 20:44:49.464379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118400, cid 2, qid 0 00:24:56.293 [2024-12-05 20:44:49.464383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.293 [2024-12-05 20:44:49.464386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118700, cid 4, qid 0 00:24:56.293 [2024-12-05 20:44:49.464479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.293 [2024-12-05 20:44:49.464484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.293 [2024-12-05 20:44:49.464486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118700) on tqpair=0x10b6550 00:24:56.293 [2024-12-05 20:44:49.464493] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:56.293 [2024-12-05 20:44:49.464497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.464503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.464510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.464515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.293 [2024-12-05 
20:44:49.464520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b6550) 00:24:56.293 [2024-12-05 20:44:49.464525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:56.293 [2024-12-05 20:44:49.464534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118700, cid 4, qid 0 00:24:56.293 [2024-12-05 20:44:49.464593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.293 [2024-12-05 20:44:49.464598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.293 [2024-12-05 20:44:49.464601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118700) on tqpair=0x10b6550 00:24:56.293 [2024-12-05 20:44:49.464652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.464660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.464666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b6550) 00:24:56.293 [2024-12-05 20:44:49.464674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.293 [2024-12-05 20:44:49.464683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118700, cid 4, qid 0 00:24:56.293 [2024-12-05 20:44:49.464756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.293 [2024-12-05 20:44:49.464760] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.293 [2024-12-05 20:44:49.464763] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464766] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6550): datao=0, datal=4096, cccid=4 00:24:56.293 [2024-12-05 20:44:49.464769] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1118700) on tqpair(0x10b6550): expected_datao=0, payload_size=4096 00:24:56.293 [2024-12-05 20:44:49.464773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464782] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464785] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.293 [2024-12-05 20:44:49.464802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.293 [2024-12-05 20:44:49.464805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118700) on tqpair=0x10b6550 00:24:56.293 [2024-12-05 20:44:49.464815] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:56.293 [2024-12-05 20:44:49.464823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.464830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.464835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x10b6550) 00:24:56.293 [2024-12-05 20:44:49.464844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.293 [2024-12-05 20:44:49.464854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118700, cid 4, qid 0 00:24:56.293 [2024-12-05 20:44:49.464935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.293 [2024-12-05 20:44:49.464941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.293 [2024-12-05 20:44:49.464944] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464946] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6550): datao=0, datal=4096, cccid=4 00:24:56.293 [2024-12-05 20:44:49.464950] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1118700) on tqpair(0x10b6550): expected_datao=0, payload_size=4096 00:24:56.293 [2024-12-05 20:44:49.464953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464963] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.464967] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.505197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.293 [2024-12-05 20:44:49.505208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.293 [2024-12-05 20:44:49.505211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.505215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118700) on tqpair=0x10b6550 00:24:56.293 [2024-12-05 20:44:49.505226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:56.293 
[2024-12-05 20:44:49.505235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.505241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.505244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b6550) 00:24:56.293 [2024-12-05 20:44:49.505250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.293 [2024-12-05 20:44:49.505261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118700, cid 4, qid 0 00:24:56.293 [2024-12-05 20:44:49.505332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.293 [2024-12-05 20:44:49.505338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.293 [2024-12-05 20:44:49.505340] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.505343] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6550): datao=0, datal=4096, cccid=4 00:24:56.293 [2024-12-05 20:44:49.505347] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1118700) on tqpair(0x10b6550): expected_datao=0, payload_size=4096 00:24:56.293 [2024-12-05 20:44:49.505350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.505361] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.505365] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.551067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.293 [2024-12-05 20:44:49.551078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.293 [2024-12-05 20:44:49.551081] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.551084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118700) on tqpair=0x10b6550 00:24:56.293 [2024-12-05 20:44:49.551091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.551099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.551110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.551115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.551119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.551124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.551128] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:56.293 [2024-12-05 20:44:49.551132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:56.293 [2024-12-05 20:44:49.551136] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:56.293 [2024-12-05 20:44:49.551148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.551152] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b6550) 00:24:56.293 [2024-12-05 20:44:49.551159] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.293 [2024-12-05 20:44:49.551164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.293 [2024-12-05 20:44:49.551167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.294 [2024-12-05 20:44:49.551169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10b6550) 00:24:56.294 [2024-12-05 20:44:49.551174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.294 [2024-12-05 20:44:49.551188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118700, cid 4, qid 0 00:24:56.294 [2024-12-05 20:44:49.551193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118880, cid 5, qid 0 00:24:56.294 [2024-12-05 20:44:49.551268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.294 [2024-12-05 20:44:49.551274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.294 [2024-12-05 20:44:49.551277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.294 [2024-12-05 20:44:49.551280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118700) on tqpair=0x10b6550 00:24:56.294 [2024-12-05 20:44:49.551286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.294 [2024-12-05 20:44:49.551290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.294 [2024-12-05 20:44:49.551292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.294 [2024-12-05 20:44:49.551295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118880) on tqpair=0x10b6550 00:24:56.294 [2024-12-05 
20:44:49.551303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.294 [2024-12-05 20:44:49.551307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10b6550) 00:24:56.294 [2024-12-05 20:44:49.551312] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-12-05 20:44:49.551320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118880, cid 5, qid 0 00:24:56.294 [2024-12-05 20:44:49.551382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.294 [2024-12-05 20:44:49.551388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.294 [2024-12-05 20:44:49.551391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.294 [2024-12-05 20:44:49.551394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118880) on tqpair=0x10b6550 00:24:56.294 [2024-12-05 20:44:49.551402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.294 [2024-12-05 20:44:49.551408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10b6550) 00:24:56.294 [2024-12-05 20:44:49.551412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-12-05 20:44:49.551421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118880, cid 5, qid 0 00:24:56.294 [2024-12-05 20:44:49.551482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.294 [2024-12-05 20:44:49.551487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.294 [2024-12-05 20:44:49.551489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.294 [2024-12-05 20:44:49.551492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1118880) on tqpair=0x10b6550 00:24:56.294 [2024-12-05 20:44:49.551499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.294 [2024-12-05 20:44:49.551503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10b6550) 00:24:56.294 [2024-12-05 20:44:49.551509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-12-05 20:44:49.551518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118880, cid 5, qid 0 00:24:56.294 [2024-12-05 20:44:49.551572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.294 [2024-12-05 20:44:49.551577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.294 [2024-12-05 20:44:49.551580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.294 [2024-12-05 20:44:49.551582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118880) on tqpair=0x10b6550 00:24:56.294 [2024-12-05 20:44:49.551597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.294 [2024-12-05 20:44:49.551600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10b6550) 00:24:56.294 [2024-12-05 20:44:49.551605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-12-05 20:44:49.551611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.294 [2024-12-05 20:44:49.551613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b6550) 00:24:56.294 [2024-12-05 20:44:49.551618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.294 [2024-12-05 20:44:49.551625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x10b6550)
00:24:56.294 [2024-12-05 20:44:49.551635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.294 [2024-12-05 20:44:49.551641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10b6550)
00:24:56.294 [2024-12-05 20:44:49.551648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.294 [2024-12-05 20:44:49.551659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118880, cid 5, qid 0
00:24:56.294 [2024-12-05 20:44:49.551663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118700, cid 4, qid 0
00:24:56.294 [2024-12-05 20:44:49.551666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118a00, cid 6, qid 0
00:24:56.294 [2024-12-05 20:44:49.551670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118b80, cid 7, qid 0
00:24:56.294 [2024-12-05 20:44:49.551794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:56.294 [2024-12-05 20:44:49.551799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:56.294 [2024-12-05 20:44:49.551804] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551807] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6550): datao=0, datal=8192, cccid=5
00:24:56.294 [2024-12-05 20:44:49.551810] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1118880) on tqpair(0x10b6550): expected_datao=0, payload_size=8192
00:24:56.294 [2024-12-05 20:44:49.551814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551838] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551842] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:56.294 [2024-12-05 20:44:49.551850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:56.294 [2024-12-05 20:44:49.551853] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551856] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6550): datao=0, datal=512, cccid=4
00:24:56.294 [2024-12-05 20:44:49.551859] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1118700) on tqpair(0x10b6550): expected_datao=0, payload_size=512
00:24:56.294 [2024-12-05 20:44:49.551862] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551867] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551870] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:56.294 [2024-12-05 20:44:49.551878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:56.294 [2024-12-05 20:44:49.551881] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551883] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6550): datao=0, datal=512, cccid=6
00:24:56.294 [2024-12-05 20:44:49.551887] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1118a00) on tqpair(0x10b6550): expected_datao=0, payload_size=512
00:24:56.294 [2024-12-05 20:44:49.551892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551898] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551900] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:56.294 [2024-12-05 20:44:49.551909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:56.294 [2024-12-05 20:44:49.551911] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551914] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6550): datao=0, datal=4096, cccid=7
00:24:56.294 [2024-12-05 20:44:49.551917] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1118b80) on tqpair(0x10b6550): expected_datao=0, payload_size=4096
00:24:56.294 [2024-12-05 20:44:49.551921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551930] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.551933] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.597070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.294 [2024-12-05 20:44:49.597085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.294 [2024-12-05 20:44:49.597088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.294 [2024-12-05 20:44:49.597091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118880) on tqpair=0x10b6550
00:24:56.294 [2024-12-05 20:44:49.597103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.294 [2024-12-05 20:44:49.597107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.294 [2024-12-05 20:44:49.597110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.295 [2024-12-05 20:44:49.597113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118700) on tqpair=0x10b6550
00:24:56.295 [2024-12-05 20:44:49.597123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.295 [2024-12-05 20:44:49.597127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.295 [2024-12-05 20:44:49.597130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.295 [2024-12-05 20:44:49.597133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118a00) on tqpair=0x10b6550
00:24:56.295 [2024-12-05 20:44:49.597138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.295 [2024-12-05 20:44:49.597143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.295 [2024-12-05 20:44:49.597145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.295 [2024-12-05 20:44:49.597148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118b80) on tqpair=0x10b6550
00:24:56.295 =====================================================
00:24:56.295 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:56.295 =====================================================
00:24:56.295 Controller Capabilities/Features
00:24:56.295 ================================
00:24:56.295 Vendor ID: 8086
00:24:56.295 Subsystem Vendor ID: 8086
00:24:56.295 Serial Number: SPDK00000000000001
00:24:56.295 Model Number: SPDK bdev Controller
00:24:56.295 Firmware Version: 25.01
00:24:56.295 Recommended Arb Burst: 6
00:24:56.295 IEEE OUI Identifier: e4 d2 5c
00:24:56.295 Multi-path I/O
00:24:56.295 May have multiple subsystem ports: Yes
00:24:56.295 May have multiple controllers: Yes
00:24:56.295 Associated with SR-IOV VF: No
00:24:56.295 Max Data Transfer Size: 131072
00:24:56.295 Max Number of Namespaces: 32
00:24:56.295 Max Number of I/O Queues: 127
00:24:56.295 NVMe Specification Version (VS): 1.3
00:24:56.295 NVMe Specification Version (Identify): 1.3
00:24:56.295 Maximum Queue Entries: 128
00:24:56.295 Contiguous Queues Required: Yes
00:24:56.295 Arbitration Mechanisms Supported
00:24:56.295 Weighted Round Robin: Not Supported
00:24:56.295 Vendor Specific: Not Supported
00:24:56.295 Reset Timeout: 15000 ms
00:24:56.295 Doorbell Stride: 4 bytes
00:24:56.295 NVM Subsystem Reset: Not Supported
00:24:56.295 Command Sets Supported
00:24:56.295 NVM Command Set: Supported
00:24:56.295 Boot Partition: Not Supported
00:24:56.295 Memory Page Size Minimum: 4096 bytes
00:24:56.295 Memory Page Size Maximum: 4096 bytes
00:24:56.295 Persistent Memory Region: Not Supported
00:24:56.295 Optional Asynchronous Events Supported
00:24:56.295 Namespace Attribute Notices: Supported
00:24:56.295 Firmware Activation Notices: Not Supported
00:24:56.295 ANA Change Notices: Not Supported
00:24:56.295 PLE Aggregate Log Change Notices: Not Supported
00:24:56.295 LBA Status Info Alert Notices: Not Supported
00:24:56.295 EGE Aggregate Log Change Notices: Not Supported
00:24:56.295 Normal NVM Subsystem Shutdown event: Not Supported
00:24:56.295 Zone Descriptor Change Notices: Not Supported
00:24:56.295 Discovery Log Change Notices: Not Supported
00:24:56.295 Controller Attributes
00:24:56.295 128-bit Host Identifier: Supported
00:24:56.295 Non-Operational Permissive Mode: Not Supported
00:24:56.295 NVM Sets: Not Supported
00:24:56.295 Read Recovery Levels: Not Supported
00:24:56.295 Endurance Groups: Not Supported
00:24:56.295 Predictable Latency Mode: Not Supported
00:24:56.295 Traffic Based Keep ALive: Not Supported
00:24:56.295 Namespace Granularity: Not Supported
00:24:56.295 SQ Associations: Not Supported
00:24:56.295 UUID List: Not Supported
00:24:56.295 Multi-Domain Subsystem: Not Supported
00:24:56.295 Fixed Capacity Management: Not Supported
00:24:56.295 Variable Capacity Management: Not Supported
00:24:56.295 Delete Endurance Group: Not Supported
00:24:56.295 Delete NVM Set: Not Supported
00:24:56.295 Extended LBA Formats Supported: Not Supported
00:24:56.295 Flexible Data Placement Supported: Not Supported
00:24:56.295 
00:24:56.295 Controller Memory Buffer Support
00:24:56.295 ================================
00:24:56.295 Supported: No
00:24:56.295 
00:24:56.295 Persistent Memory Region Support
00:24:56.295 ================================
00:24:56.295 Supported: No
00:24:56.295 
00:24:56.295 Admin Command Set Attributes
00:24:56.295 ============================
00:24:56.295 Security Send/Receive: Not Supported
00:24:56.295 Format NVM: Not Supported
00:24:56.295 Firmware Activate/Download: Not Supported
00:24:56.295 Namespace Management: Not Supported
00:24:56.295 Device Self-Test: Not Supported
00:24:56.295 Directives: Not Supported
00:24:56.295 NVMe-MI: Not Supported
00:24:56.295 Virtualization Management: Not Supported
00:24:56.295 Doorbell Buffer Config: Not Supported
00:24:56.295 Get LBA Status Capability: Not Supported
00:24:56.295 Command & Feature Lockdown Capability: Not Supported
00:24:56.295 Abort Command Limit: 4
00:24:56.295 Async Event Request Limit: 4
00:24:56.295 Number of Firmware Slots: N/A
00:24:56.295 Firmware Slot 1 Read-Only: N/A
00:24:56.295 Firmware Activation Without Reset: N/A
00:24:56.295 Multiple Update Detection Support: N/A
00:24:56.295 Firmware Update Granularity: No Information Provided
00:24:56.295 Per-Namespace SMART Log: No
00:24:56.295 Asymmetric Namespace Access Log Page: Not Supported
00:24:56.295 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:56.295 Command Effects Log Page: Supported
00:24:56.295 Get Log Page Extended Data: Supported
00:24:56.295 Telemetry Log Pages: Not Supported
00:24:56.295 Persistent Event Log Pages: Not Supported
00:24:56.295 Supported Log Pages Log Page: May Support
00:24:56.295 Commands Supported & Effects Log Page: Not Supported
00:24:56.295 Feature Identifiers & Effects Log Page:May Support
00:24:56.295 NVMe-MI Commands & Effects Log Page: May Support
00:24:56.295 Data Area 4 for Telemetry Log: Not Supported
00:24:56.295 Error Log Page Entries Supported: 128
00:24:56.295 Keep Alive: Supported
00:24:56.295 Keep Alive Granularity: 10000 ms
00:24:56.295 
00:24:56.295 NVM Command Set Attributes
00:24:56.295 ==========================
00:24:56.295 Submission Queue Entry Size
00:24:56.295 Max: 64
00:24:56.295 Min: 64
00:24:56.295 Completion Queue Entry Size
00:24:56.295 Max: 16
00:24:56.295 Min: 16
00:24:56.295 Number of Namespaces: 32
00:24:56.295 Compare Command: Supported
00:24:56.295 Write Uncorrectable Command: Not Supported
00:24:56.295 Dataset Management Command: Supported
00:24:56.295 Write Zeroes Command: Supported
00:24:56.295 Set Features Save Field: Not Supported
00:24:56.295 Reservations: Supported
00:24:56.295 Timestamp: Not Supported
00:24:56.295 Copy: Supported
00:24:56.295 Volatile Write Cache: Present
00:24:56.295 Atomic Write Unit (Normal): 1
00:24:56.295 Atomic Write Unit (PFail): 1
00:24:56.295 Atomic Compare & Write Unit: 1
00:24:56.295 Fused Compare & Write: Supported
00:24:56.295 Scatter-Gather List
00:24:56.295 SGL Command Set: Supported
00:24:56.295 SGL Keyed: Supported
00:24:56.295 SGL Bit Bucket Descriptor: Not Supported
00:24:56.295 SGL Metadata Pointer: Not Supported
00:24:56.295 Oversized SGL: Not Supported
00:24:56.295 SGL Metadata Address: Not Supported
00:24:56.295 SGL Offset: Supported
00:24:56.295 Transport SGL Data Block: Not Supported
00:24:56.295 Replay Protected Memory Block: Not Supported
00:24:56.295 
00:24:56.296 Firmware Slot Information
00:24:56.296 =========================
00:24:56.296 Active slot: 1
00:24:56.296 Slot 1 Firmware Revision: 25.01
00:24:56.296 
00:24:56.296 
00:24:56.296 Commands Supported and Effects
00:24:56.296 ==============================
00:24:56.296 Admin Commands
00:24:56.296 --------------
00:24:56.296 Get Log Page (02h): Supported
00:24:56.296 Identify (06h): Supported
00:24:56.296 Abort (08h): Supported
00:24:56.296 Set Features (09h): Supported
00:24:56.296 Get Features (0Ah): Supported
00:24:56.296 Asynchronous Event Request (0Ch): Supported
00:24:56.296 Keep Alive (18h): Supported
00:24:56.296 I/O Commands
00:24:56.296 ------------
00:24:56.296 Flush (00h): Supported LBA-Change
00:24:56.296 Write (01h): Supported LBA-Change
00:24:56.296 Read (02h): Supported
00:24:56.296 Compare (05h): Supported
00:24:56.296 Write Zeroes (08h): Supported LBA-Change
00:24:56.296 Dataset Management (09h): Supported LBA-Change
00:24:56.296 Copy (19h): Supported LBA-Change
00:24:56.296 
00:24:56.296 Error Log
00:24:56.296 =========
00:24:56.296 
00:24:56.296 Arbitration
00:24:56.296 ===========
00:24:56.296 Arbitration Burst: 1
00:24:56.296 
00:24:56.296 Power Management
00:24:56.296 ================
00:24:56.296 Number of Power States: 1
00:24:56.296 Current Power State: Power State #0
00:24:56.296 Power State #0:
00:24:56.296 Max Power: 0.00 W
00:24:56.296 Non-Operational State: Operational
00:24:56.296 Entry Latency: Not Reported
00:24:56.296 Exit Latency: Not Reported
00:24:56.296 Relative Read Throughput: 0
00:24:56.296 Relative Read Latency: 0
00:24:56.296 Relative Write Throughput: 0
00:24:56.296 Relative Write Latency: 0
00:24:56.296 Idle Power: Not Reported
00:24:56.296 Active Power: Not Reported
00:24:56.296 Non-Operational Permissive Mode: Not Supported
00:24:56.296 
00:24:56.296 Health Information
00:24:56.296 ==================
00:24:56.296 Critical Warnings:
00:24:56.296 Available Spare Space: OK
00:24:56.296 Temperature: OK
00:24:56.296 Device Reliability: OK
00:24:56.296 Read Only: No
00:24:56.296 Volatile Memory Backup: OK
00:24:56.296 Current Temperature: 0 Kelvin (-273 Celsius)
00:24:56.296 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:24:56.296 Available Spare: 0%
00:24:56.296 Available Spare Threshold: 0%
00:24:56.296 Life Percentage Used:[2024-12-05 20:44:49.597221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10b6550)
00:24:56.296 [2024-12-05 20:44:49.597231] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.296 [2024-12-05 20:44:49.597243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118b80, cid 7, qid 0
00:24:56.296 [2024-12-05 20:44:49.597303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.296 [2024-12-05 20:44:49.597308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.296 [2024-12-05 20:44:49.597311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118b80) on tqpair=0x10b6550
00:24:56.296 [2024-12-05 20:44:49.597338] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:24:56.296 [2024-12-05 20:44:49.597347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118100) on tqpair=0x10b6550
00:24:56.296 [2024-12-05 20:44:49.597352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.296 [2024-12-05 20:44:49.597356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118280) on tqpair=0x10b6550
00:24:56.296 [2024-12-05 20:44:49.597359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.296 [2024-12-05 20:44:49.597363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118400) on tqpair=0x10b6550
00:24:56.296 [2024-12-05 20:44:49.597366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.296 [2024-12-05 20:44:49.597370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.296 [2024-12-05 20:44:49.597374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.296 [2024-12-05 20:44:49.597380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.296 [2024-12-05 20:44:49.597391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.296 [2024-12-05 20:44:49.597401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.296 [2024-12-05 20:44:49.597463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.296 [2024-12-05 20:44:49.597468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.296 [2024-12-05 20:44:49.597471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.296 [2024-12-05 20:44:49.597479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.296 [2024-12-05 20:44:49.597493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.296 [2024-12-05 20:44:49.597505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.296 [2024-12-05 20:44:49.597585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.296 [2024-12-05 20:44:49.597590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.296 [2024-12-05 20:44:49.597593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.296 [2024-12-05 20:44:49.597599] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:24:56.296 [2024-12-05 20:44:49.597603] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:24:56.296 [2024-12-05 20:44:49.597610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.296 [2024-12-05 20:44:49.597621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.296 [2024-12-05 20:44:49.597630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.296 [2024-12-05 20:44:49.597702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.296 [2024-12-05 20:44:49.597707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.296 [2024-12-05 20:44:49.597710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.296 [2024-12-05 20:44:49.597720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.296 [2024-12-05 20:44:49.597731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.296 [2024-12-05 20:44:49.597739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.296 [2024-12-05 20:44:49.597793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.296 [2024-12-05 20:44:49.597798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.296 [2024-12-05 20:44:49.597801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.296 [2024-12-05 20:44:49.597811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.296 [2024-12-05 20:44:49.597822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.296 [2024-12-05 20:44:49.597831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.296 [2024-12-05 20:44:49.597886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.296 [2024-12-05 20:44:49.597892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.296 [2024-12-05 20:44:49.597894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.296 [2024-12-05 20:44:49.597906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.296 [2024-12-05 20:44:49.597912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.296 [2024-12-05 20:44:49.597917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.296 [2024-12-05 20:44:49.597926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.297 [2024-12-05 20:44:49.597980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.297 [2024-12-05 20:44:49.597985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.297 [2024-12-05 20:44:49.597988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.597991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.297 [2024-12-05 20:44:49.597999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.297 [2024-12-05 20:44:49.598010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.297 [2024-12-05 20:44:49.598018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.297 [2024-12-05 20:44:49.598080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.297 [2024-12-05 20:44:49.598086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.297 [2024-12-05 20:44:49.598089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.297 [2024-12-05 20:44:49.598099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.297 [2024-12-05 20:44:49.598111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.297 [2024-12-05 20:44:49.598119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.297 [2024-12-05 20:44:49.598177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.297 [2024-12-05 20:44:49.598182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.297 [2024-12-05 20:44:49.598185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.297 [2024-12-05 20:44:49.598195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.297 [2024-12-05 20:44:49.598206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.297 [2024-12-05 20:44:49.598216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.297 [2024-12-05 20:44:49.598272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.297 [2024-12-05 20:44:49.598277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.297 [2024-12-05 20:44:49.598280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.297 [2024-12-05 20:44:49.598290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.297 [2024-12-05 20:44:49.598302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.297 [2024-12-05 20:44:49.598311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.297 [2024-12-05 20:44:49.598366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.297 [2024-12-05 20:44:49.598371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.297 [2024-12-05 20:44:49.598374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.297 [2024-12-05 20:44:49.598384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.297 [2024-12-05 20:44:49.598395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.297 [2024-12-05 20:44:49.598403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.297 [2024-12-05 20:44:49.598459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.297 [2024-12-05 20:44:49.598463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.297 [2024-12-05 20:44:49.598466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.297 [2024-12-05 20:44:49.598476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.297 [2024-12-05 20:44:49.598487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.297 [2024-12-05 20:44:49.598495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.297 [2024-12-05 20:44:49.598564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.297 [2024-12-05 20:44:49.598570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.297 [2024-12-05 20:44:49.598572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.297 [2024-12-05 20:44:49.598583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.297 [2024-12-05 20:44:49.598593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.297 [2024-12-05 20:44:49.598603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.297 [2024-12-05 20:44:49.598662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.297 [2024-12-05 20:44:49.598667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.297 [2024-12-05 20:44:49.598670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.297 [2024-12-05 20:44:49.598681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.297 [2024-12-05 20:44:49.598693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.297 [2024-12-05 20:44:49.598702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.297 [2024-12-05 20:44:49.598763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.297 [2024-12-05 20:44:49.598769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.297 [2024-12-05 20:44:49.598771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.297 [2024-12-05 20:44:49.598781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.297 [2024-12-05 20:44:49.598792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.297 [2024-12-05 20:44:49.598800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.297 [2024-12-05 20:44:49.598857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.297 [2024-12-05 20:44:49.598862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.297 [2024-12-05 20:44:49.598865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.297 [2024-12-05 20:44:49.598875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.297 [2024-12-05 20:44:49.598881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.298 [2024-12-05 20:44:49.598886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.298 [2024-12-05 20:44:49.598894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.298 [2024-12-05 20:44:49.598951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.298 [2024-12-05 20:44:49.598956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.298 [2024-12-05 20:44:49.598958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.298 [2024-12-05 20:44:49.598961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.298 [2024-12-05 20:44:49.598969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.298 [2024-12-05 20:44:49.598972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.298 [2024-12-05 20:44:49.598974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550)
00:24:56.298 [2024-12-05 20:44:49.598979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.298 [2024-12-05 20:44:49.598988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0
00:24:56.298 [2024-12-05 20:44:49.599044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.298 [2024-12-05 20:44:49.599049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.298 [2024-12-05 20:44:49.599052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.298 [2024-12-05 20:44:49.599055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.298 [2024-12-05 20:44:49.599068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:56.298 [2024-12-05 20:44:49.599071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:56.298 [2024-12-05 20:44:49.599074] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.298 [2024-12-05 20:44:49.599081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.298 [2024-12-05 20:44:49.599090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.298 [2024-12-05 20:44:49.599162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.298 [2024-12-05 20:44:49.599167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.298 [2024-12-05 20:44:49.599170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.298 [2024-12-05 20:44:49.599179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.298 [2024-12-05 20:44:49.599190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.298 [2024-12-05 20:44:49.599198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.298 [2024-12-05 20:44:49.599257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.298 [2024-12-05 20:44:49.599261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.298 [2024-12-05 20:44:49.599264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) 
on tqpair=0x10b6550 00:24:56.298 [2024-12-05 20:44:49.599274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.298 [2024-12-05 20:44:49.599285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.298 [2024-12-05 20:44:49.599293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.298 [2024-12-05 20:44:49.599350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.298 [2024-12-05 20:44:49.599355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.298 [2024-12-05 20:44:49.599358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.298 [2024-12-05 20:44:49.599368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.298 [2024-12-05 20:44:49.599378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.298 [2024-12-05 20:44:49.599387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.298 [2024-12-05 20:44:49.599445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.298 [2024-12-05 20:44:49.599450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:24:56.298 [2024-12-05 20:44:49.599453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.298 [2024-12-05 20:44:49.599463] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.298 [2024-12-05 20:44:49.599474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.298 [2024-12-05 20:44:49.599484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.298 [2024-12-05 20:44:49.599540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.298 [2024-12-05 20:44:49.599545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.298 [2024-12-05 20:44:49.599547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.298 [2024-12-05 20:44:49.599557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.298 [2024-12-05 20:44:49.599568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.298 [2024-12-05 20:44:49.599576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1118580, cid 3, qid 0 00:24:56.298 [2024-12-05 20:44:49.599631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.298 [2024-12-05 20:44:49.599636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.298 [2024-12-05 20:44:49.599639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.298 [2024-12-05 20:44:49.599648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.298 [2024-12-05 20:44:49.599659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.298 [2024-12-05 20:44:49.599668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.298 [2024-12-05 20:44:49.599724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.298 [2024-12-05 20:44:49.599729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.298 [2024-12-05 20:44:49.599731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.298 [2024-12-05 20:44:49.599742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.298 [2024-12-05 20:44:49.599752] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.298 [2024-12-05 20:44:49.599761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.298 [2024-12-05 20:44:49.599820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.298 [2024-12-05 20:44:49.599825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.298 [2024-12-05 20:44:49.599827] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.298 [2024-12-05 20:44:49.599837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.298 [2024-12-05 20:44:49.599848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.298 [2024-12-05 20:44:49.599857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.298 [2024-12-05 20:44:49.599914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.298 [2024-12-05 20:44:49.599919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.298 [2024-12-05 20:44:49.599921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.298 [2024-12-05 20:44:49.599931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599934] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.298 [2024-12-05 20:44:49.599937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.298 [2024-12-05 20:44:49.599942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.298 [2024-12-05 20:44:49.599951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.298 [2024-12-05 20:44:49.600005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.298 [2024-12-05 20:44:49.600010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.600013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.299 [2024-12-05 20:44:49.600023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.299 [2024-12-05 20:44:49.600034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.299 [2024-12-05 20:44:49.600042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.299 [2024-12-05 20:44:49.600105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.299 [2024-12-05 20:44:49.600111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.600113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600116] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.299 [2024-12-05 20:44:49.600124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.299 [2024-12-05 20:44:49.600135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.299 [2024-12-05 20:44:49.600143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.299 [2024-12-05 20:44:49.600195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.299 [2024-12-05 20:44:49.600200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.600203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.299 [2024-12-05 20:44:49.600213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.299 [2024-12-05 20:44:49.600224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.299 [2024-12-05 20:44:49.600232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.299 [2024-12-05 20:44:49.600292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.299 [2024-12-05 
20:44:49.600297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.600299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.299 [2024-12-05 20:44:49.600310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.299 [2024-12-05 20:44:49.600321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.299 [2024-12-05 20:44:49.600329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.299 [2024-12-05 20:44:49.600386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.299 [2024-12-05 20:44:49.600392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.600394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.299 [2024-12-05 20:44:49.600404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.299 [2024-12-05 20:44:49.600415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.299 [2024-12-05 
20:44:49.600423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.299 [2024-12-05 20:44:49.600479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.299 [2024-12-05 20:44:49.600484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.600487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.299 [2024-12-05 20:44:49.600497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.299 [2024-12-05 20:44:49.600508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.299 [2024-12-05 20:44:49.600517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.299 [2024-12-05 20:44:49.600574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.299 [2024-12-05 20:44:49.600579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.600581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.299 [2024-12-05 20:44:49.600592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.299 [2024-12-05 20:44:49.600602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.299 [2024-12-05 20:44:49.600610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.299 [2024-12-05 20:44:49.600670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.299 [2024-12-05 20:44:49.600676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.600679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.299 [2024-12-05 20:44:49.600689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.299 [2024-12-05 20:44:49.600700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.299 [2024-12-05 20:44:49.600708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.299 [2024-12-05 20:44:49.600763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.299 [2024-12-05 20:44:49.600768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.600770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.299 [2024-12-05 20:44:49.600781] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.299 [2024-12-05 20:44:49.600792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.299 [2024-12-05 20:44:49.600800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.299 [2024-12-05 20:44:49.600856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.299 [2024-12-05 20:44:49.600861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.600864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.299 [2024-12-05 20:44:49.600874] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.299 [2024-12-05 20:44:49.600885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.299 [2024-12-05 20:44:49.600894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.299 [2024-12-05 20:44:49.600953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.299 [2024-12-05 20:44:49.600958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.600960] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.299 [2024-12-05 20:44:49.600971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.600976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.299 [2024-12-05 20:44:49.600982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.299 [2024-12-05 20:44:49.600990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.299 [2024-12-05 20:44:49.601044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.299 [2024-12-05 20:44:49.601049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.299 [2024-12-05 20:44:49.601053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.299 [2024-12-05 20:44:49.601056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550 00:24:56.300 [2024-12-05 20:44:49.605073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.300 [2024-12-05 20:44:49.605077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.300 [2024-12-05 20:44:49.605080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6550) 00:24:56.300 [2024-12-05 20:44:49.605085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.300 [2024-12-05 20:44:49.605095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1118580, cid 3, qid 0 00:24:56.300 [2024-12-05 
20:44:49.605199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:56.300 [2024-12-05 20:44:49.605204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:56.300 [2024-12-05 20:44:49.605207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:56.300 [2024-12-05 20:44:49.605209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1118580) on tqpair=0x10b6550
00:24:56.300 [2024-12-05 20:44:49.605215] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds
00:24:56.300 0%
00:24:56.300 Data Units Read: 0
00:24:56.300 Data Units Written: 0
00:24:56.300 Host Read Commands: 0
00:24:56.300 Host Write Commands: 0
00:24:56.300 Controller Busy Time: 0 minutes
00:24:56.300 Power Cycles: 0
00:24:56.300 Power On Hours: 0 hours
00:24:56.300 Unsafe Shutdowns: 0
00:24:56.300 Unrecoverable Media Errors: 0
00:24:56.300 Lifetime Error Log Entries: 0
00:24:56.300 Warning Temperature Time: 0 minutes
00:24:56.300 Critical Temperature Time: 0 minutes
00:24:56.300
00:24:56.300 Number of Queues
00:24:56.300 ================
00:24:56.300 Number of I/O Submission Queues: 127
00:24:56.300 Number of I/O Completion Queues: 127
00:24:56.300
00:24:56.300 Active Namespaces
00:24:56.300 =================
00:24:56.300 Namespace ID:1
00:24:56.300 Error Recovery Timeout: Unlimited
00:24:56.300 Command Set Identifier: NVM (00h)
00:24:56.300 Deallocate: Supported
00:24:56.300 Deallocated/Unwritten Error: Not Supported
00:24:56.300 Deallocated Read Value: Unknown
00:24:56.300 Deallocate in Write Zeroes: Not Supported
00:24:56.300 Deallocated Guard Field: 0xFFFF
00:24:56.300 Flush: Supported
00:24:56.300 Reservation: Supported
00:24:56.300 Namespace Sharing Capabilities: Multiple Controllers
00:24:56.300 Size (in LBAs): 131072 (0GiB)
00:24:56.300 Capacity (in LBAs): 131072 (0GiB)
00:24:56.300 Utilization (in LBAs): 131072 (0GiB)
00:24:56.300 NGUID: ABCDEF0123456789ABCDEF0123456789
00:24:56.300 EUI64: ABCDEF0123456789
00:24:56.300 UUID: 20c054bf-b6d9-4385-8c71-15fc5459555e
00:24:56.300 Thin Provisioning: Not Supported
00:24:56.300 Per-NS Atomic Units: Yes
00:24:56.300 Atomic Boundary Size (Normal): 0
00:24:56.300 Atomic Boundary Size (PFail): 0
00:24:56.300 Atomic Boundary Offset: 0
00:24:56.300 Maximum Single Source Range Length: 65535
00:24:56.300 Maximum Copy Length: 65535
00:24:56.300 Maximum Source Range Count: 1
00:24:56.300 NGUID/EUI64 Never Reused: No
00:24:56.300 Namespace Write Protected: No
00:24:56.300 Number of LBA Formats: 1
00:24:56.300 Current LBA Format: LBA Format #00
00:24:56.300 LBA Format #00: Data Size: 512 Metadata Size: 0
00:24:56.300
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:56.300 rmmod nvme_tcp
00:24:56.300 rmmod nvme_fabrics
00:24:56.300 rmmod nvme_keyring
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 451961 ']'
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 451961
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 451961 ']'
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 451961
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:56.300 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 451961
00:24:56.558 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:56.558 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:56.558 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 451961'
00:24:56.558 killing process with pid 451961
00:24:56.558 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 451961
00:24:56.558 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 451961
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:56.559 20:44:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:59.091 20:44:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:59.091
00:24:59.091 real 0m10.092s
00:24:59.091 user 0m8.362s
00:24:59.091 sys 0m4.938s
00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:59.091 ************************************
00:24:59.091 END TEST nvmf_identify
00:24:59.091 ************************************
00:24:59.091 20:44:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:59.091 20:44:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:59.091 20:44:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:59.091 20:44:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.091 ************************************ 00:24:59.091 START TEST nvmf_perf 00:24:59.091 ************************************ 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:59.091 * Looking for test storage... 00:24:59.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:59.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.091 --rc 
genhtml_branch_coverage=1 00:24:59.091 --rc genhtml_function_coverage=1 00:24:59.091 --rc genhtml_legend=1 00:24:59.091 --rc geninfo_all_blocks=1 00:24:59.091 --rc geninfo_unexecuted_blocks=1 00:24:59.091 00:24:59.091 ' 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:59.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.091 --rc genhtml_branch_coverage=1 00:24:59.091 --rc genhtml_function_coverage=1 00:24:59.091 --rc genhtml_legend=1 00:24:59.091 --rc geninfo_all_blocks=1 00:24:59.091 --rc geninfo_unexecuted_blocks=1 00:24:59.091 00:24:59.091 ' 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:59.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.091 --rc genhtml_branch_coverage=1 00:24:59.091 --rc genhtml_function_coverage=1 00:24:59.091 --rc genhtml_legend=1 00:24:59.091 --rc geninfo_all_blocks=1 00:24:59.091 --rc geninfo_unexecuted_blocks=1 00:24:59.091 00:24:59.091 ' 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:59.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.091 --rc genhtml_branch_coverage=1 00:24:59.091 --rc genhtml_function_coverage=1 00:24:59.091 --rc genhtml_legend=1 00:24:59.091 --rc geninfo_all_blocks=1 00:24:59.091 --rc geninfo_unexecuted_blocks=1 00:24:59.091 00:24:59.091 ' 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.091 
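The trace above steps through `scripts/common.sh`'s `lt`/`cmp_versions` helpers: each version string is split on `.`, `-`, and `:` (`IFS=.-:` plus `read -ra`), then the fields are compared numerically, treating a missing field as 0. A minimal standalone sketch of that idea (the function name `ver_lt` is mine, not the script's, and purely numeric fields are assumed):

```shell
#!/usr/bin/env bash
# ver_lt A B -> exit 0 if version A < version B, component-wise and numeric.
# Mirrors the cmp_versions logic traced above: split on ".-:" and compare
# each field, padding the shorter version with zeros.
ver_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing component counts as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}
```

`ver_lt 1.15 2` succeeds, matching the `lt 1.15 2` comparison the trace walks through before enabling the lcov branch-coverage options.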
20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.091 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.092 20:44:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:24:59.092 20:44:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.659 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:05.660 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.660 
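The arrays built above (`e810`, `x722`, `mlx`) are keyed by PCI (vendor, device) ID pairs: Intel `0x8086` parts go into the E810/X722 buckets and Mellanox `0x15b3` parts into `mlx`, which is how `0000:af:00.0 (0x8086 - 0x159b)` gets recognized as an E810-family NIC. A reduced sketch of that classification (the ID list here is abbreviated from the trace, not exhaustive, and the Mellanox wildcard is my simplification):

```shell
#!/usr/bin/env bash
# Classify a PCI (vendor, device) pair the way the pci_bus_cache lookups
# above do. Only a few IDs from the trace are listed; everything else
# falls through to "unknown".
nic_family() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;     # Intel X722
        0x15b3:*)                    echo mlx ;;      # Mellanox (simplified)
        *)                           echo unknown ;;
    esac
}
```

With the IDs from the trace, `nic_family 0x8086 0x159b` reports `e810`, matching the `Found 0000:af:00.0 (0x8086 - 0x159b)` lines above.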
20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:05.660 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:05.660 Found net devices under 0000:af:00.0: cvl_0_0 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:05.660 Found net devices under 0000:af:00.1: cvl_0_1 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.660 20:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.660 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.660 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.660 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.660 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.660 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.660 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.660 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.660 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:25:05.660 00:25:05.660 --- 10.0.0.2 ping statistics --- 00:25:05.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.660 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:25:05.660 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:25:05.660 00:25:05.660 --- 10.0.0.1 ping statistics --- 00:25:05.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.660 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:05.660 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=455934 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 455934 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:05.661 
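Note the firewall handling in the setup above: the `ipts` wrapper tags every rule it inserts with an `SPDK_NVMF:` comment, so the `iptr` cleanup (visible at the top of this section) can remove exactly those rules with `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The tag-and-sweep pair can be sketched on plain text, without touching a real firewall (function names are mine; applying the swept dump with `iptables-restore` would require root):

```shell
#!/usr/bin/env bash
# Tag-and-sweep sketch of the ipts/iptr pair in the trace.
# ipts_args builds the argument string iptables would get, with the
# SPDK_NVMF comment appended so the rule is identifiable later.
ipts_args() {
    printf '%s\n' "$* -m comment --comment SPDK_NVMF:$*"
}
# sweep_rules strips tagged lines from an iptables-save dump on stdin;
# piping the result to iptables-restore completes the cleanup.
sweep_rules() {
    grep -v SPDK_NVMF
}
```

This is why the cleanup at the start of this section never has to remember which rules the test added: anything carrying the `SPDK_NVMF` comment is swept, everything else survives the restore.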
20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 455934 ']' 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.661 20:44:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:05.661 [2024-12-05 20:44:58.311077] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:25:05.661 [2024-12-05 20:44:58.311116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.661 [2024-12-05 20:44:58.384420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:05.661 [2024-12-05 20:44:58.424859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.661 [2024-12-05 20:44:58.424892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.661 [2024-12-05 20:44:58.424899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.661 [2024-12-05 20:44:58.424904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.661 [2024-12-05 20:44:58.424908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
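`waitforlisten 455934` above blocks until the freshly launched `nvmf_tgt` is both alive and answering on `/var/tmp/spdk.sock`. A reduced sketch of that polling pattern (an assumption on my part: the real helper also issues an RPC over the socket, while this version only polls for the pid staying alive and the socket path appearing):

```shell
#!/usr/bin/env bash
# Poll until $pid is still alive AND $path exists, or give up after
# $timeout seconds. Returns 0 when ready, 1 on timeout or dead process.
wait_for_listen() {
    local pid=$1 path=$2 timeout=${3:-10}
    local deadline=$(( SECONDS + timeout ))
    while (( SECONDS < deadline )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -e "$path" ] && return 0               # listen socket showed up
        sleep 0.2
    done
    return 1
}
```

Bailing out as soon as `kill -0` fails is the important part: a crashed target is reported immediately instead of burning the whole timeout, which matches the `max_retries`/early-exit behavior traced above.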
00:25:05.661 [2024-12-05 20:44:58.426474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.661 [2024-12-05 20:44:58.426589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.661 [2024-12-05 20:44:58.426736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.661 [2024-12-05 20:44:58.426737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.919 20:44:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.920 20:44:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:05.920 20:44:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:05.920 20:44:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:05.920 20:44:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:05.920 20:44:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.920 20:44:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:05.920 20:44:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:09.206 20:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:09.206 20:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:09.206 20:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:25:09.206 20:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:09.206 20:45:02 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:09.206 20:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']' 00:25:09.206 20:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:09.206 20:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:09.206 20:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:09.464 [2024-12-05 20:45:02.742561] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.465 20:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:09.723 20:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:09.723 20:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:09.723 20:45:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:09.723 20:45:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:09.981 20:45:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.240 [2024-12-05 20:45:03.499626] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.240 20:45:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:25:10.499 20:45:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:25:10.499 20:45:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:25:10.499 20:45:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:10.499 20:45:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:25:11.872 Initializing NVMe Controllers 00:25:11.872 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:25:11.873 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:25:11.873 Initialization complete. Launching workers. 00:25:11.873 ======================================================== 00:25:11.873 Latency(us) 00:25:11.873 Device Information : IOPS MiB/s Average min max 00:25:11.873 PCIE (0000:86:00.0) NSID 1 from core 0: 104798.81 409.37 304.88 10.65 7199.61 00:25:11.873 ======================================================== 00:25:11.873 Total : 104798.81 409.37 304.88 10.65 7199.61 00:25:11.873 00:25:11.873 20:45:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:13.246 Initializing NVMe Controllers 00:25:13.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:13.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:13.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:13.246 Initialization complete. Launching workers. 
00:25:13.246 ======================================================== 00:25:13.246 Latency(us) 00:25:13.246 Device Information : IOPS MiB/s Average min max 00:25:13.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.74 0.28 13972.94 104.06 45686.30 00:25:13.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.82 0.20 19438.15 7358.36 47885.42 00:25:13.247 ======================================================== 00:25:13.247 Total : 124.56 0.49 16246.46 104.06 47885.42 00:25:13.247 00:25:13.247 20:45:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:14.621 Initializing NVMe Controllers 00:25:14.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:14.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:14.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:14.621 Initialization complete. Launching workers. 
00:25:14.621 ======================================================== 00:25:14.621 Latency(us) 00:25:14.621 Device Information : IOPS MiB/s Average min max 00:25:14.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12230.46 47.78 2615.84 368.05 6035.31 00:25:14.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3943.99 15.41 8140.00 6544.04 15852.59 00:25:14.621 ======================================================== 00:25:14.621 Total : 16174.46 63.18 3962.86 368.05 15852.59 00:25:14.621 00:25:14.621 20:45:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:14.621 20:45:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:14.621 20:45:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:17.152 Initializing NVMe Controllers 00:25:17.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:17.153 Controller IO queue size 128, less than required. 00:25:17.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.153 Controller IO queue size 128, less than required. 00:25:17.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:17.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:17.153 Initialization complete. Launching workers. 
00:25:17.153 ======================================================== 00:25:17.153 Latency(us) 00:25:17.153 Device Information : IOPS MiB/s Average min max 00:25:17.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1955.84 488.96 66472.24 37816.34 125754.56 00:25:17.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.22 150.55 221610.72 119433.44 347318.57 00:25:17.153 ======================================================== 00:25:17.153 Total : 2558.06 639.52 102994.99 37816.34 347318.57 00:25:17.153 00:25:17.153 20:45:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:17.153 No valid NVMe controllers or AIO or URING devices found 00:25:17.153 Initializing NVMe Controllers 00:25:17.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:17.153 Controller IO queue size 128, less than required. 00:25:17.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.153 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:17.153 Controller IO queue size 128, less than required. 00:25:17.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.153 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:17.153 WARNING: Some requested NVMe devices were skipped 00:25:17.153 20:45:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:19.687 Initializing NVMe Controllers 00:25:19.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:19.688 Controller IO queue size 128, less than required. 00:25:19.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:19.688 Controller IO queue size 128, less than required. 00:25:19.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:19.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:19.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:19.688 Initialization complete. Launching workers. 
00:25:19.688 00:25:19.688 ==================== 00:25:19.688 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:19.688 TCP transport: 00:25:19.688 polls: 12802 00:25:19.688 idle_polls: 8963 00:25:19.688 sock_completions: 3839 00:25:19.688 nvme_completions: 6959 00:25:19.688 submitted_requests: 10364 00:25:19.688 queued_requests: 1 00:25:19.688 00:25:19.688 ==================== 00:25:19.688 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:19.688 TCP transport: 00:25:19.688 polls: 17591 00:25:19.688 idle_polls: 12796 00:25:19.688 sock_completions: 4795 00:25:19.688 nvme_completions: 6915 00:25:19.688 submitted_requests: 10238 00:25:19.688 queued_requests: 1 00:25:19.688 ======================================================== 00:25:19.688 Latency(us) 00:25:19.688 Device Information : IOPS MiB/s Average min max 00:25:19.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1739.38 434.84 75000.72 50924.09 118690.91 00:25:19.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1728.38 432.09 75242.70 41089.29 126020.18 00:25:19.688 ======================================================== 00:25:19.688 Total : 3467.76 866.94 75121.32 41089.29 126020.18 00:25:19.688 00:25:19.688 20:45:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:19.688 20:45:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.946 20:45:13 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.946 rmmod nvme_tcp 00:25:19.946 rmmod nvme_fabrics 00:25:19.946 rmmod nvme_keyring 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 455934 ']' 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 455934 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 455934 ']' 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 455934 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 455934 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 455934' 00:25:19.946 killing process with pid 455934 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 455934 00:25:19.946 20:45:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 455934 00:25:21.857 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:21.857 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:21.857 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:21.857 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:21.857 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:21.857 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:21.857 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:21.858 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:21.858 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:21.858 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.858 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.858 20:45:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.766 20:45:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:23.766 00:25:23.766 real 0m24.806s 00:25:23.766 user 1m5.127s 00:25:23.766 sys 0m8.292s 00:25:23.766 20:45:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.766 20:45:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:23.766 ************************************ 00:25:23.766 END TEST nvmf_perf 00:25:23.766 ************************************ 00:25:23.766 20:45:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:23.766 20:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:23.766 20:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.766 20:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.766 ************************************ 00:25:23.766 START TEST nvmf_fio_host 00:25:23.766 ************************************ 00:25:23.766 20:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:23.766 * Looking for test storage... 00:25:23.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.766 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:23.766 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:23.766 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:23.766 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.767 20:45:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.767 20:45:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:23.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.767 --rc genhtml_branch_coverage=1 00:25:23.767 --rc genhtml_function_coverage=1 00:25:23.767 --rc genhtml_legend=1 00:25:23.767 --rc geninfo_all_blocks=1 00:25:23.767 --rc geninfo_unexecuted_blocks=1 00:25:23.767 00:25:23.767 ' 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:23.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.767 --rc genhtml_branch_coverage=1 00:25:23.767 --rc genhtml_function_coverage=1 00:25:23.767 --rc genhtml_legend=1 00:25:23.767 --rc geninfo_all_blocks=1 00:25:23.767 --rc geninfo_unexecuted_blocks=1 00:25:23.767 00:25:23.767 ' 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:23.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.767 --rc genhtml_branch_coverage=1 00:25:23.767 --rc genhtml_function_coverage=1 00:25:23.767 --rc genhtml_legend=1 00:25:23.767 --rc geninfo_all_blocks=1 00:25:23.767 --rc geninfo_unexecuted_blocks=1 00:25:23.767 00:25:23.767 ' 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:23.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.767 --rc genhtml_branch_coverage=1 00:25:23.767 --rc genhtml_function_coverage=1 00:25:23.767 --rc genhtml_legend=1 00:25:23.767 --rc geninfo_all_blocks=1 00:25:23.767 --rc geninfo_unexecuted_blocks=1 00:25:23.767 00:25:23.767 ' 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.767 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:23.768 20:45:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.768 20:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:25:30.340 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:30.340 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.340 20:45:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:30.340 Found net devices under 0000:af:00.0: cvl_0_0 00:25:30.340 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:30.341 Found net devices under 0000:af:00.1: cvl_0_1 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.341 20:45:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:30.341 20:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:30.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:25:30.341 00:25:30.341 --- 10.0.0.2 ping statistics --- 00:25:30.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.341 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:25:30.341 00:25:30.341 --- 10.0.0.1 ping statistics --- 00:25:30.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.341 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=463097 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 463097 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 463097 ']' 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.341 20:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.341 [2024-12-05 20:45:23.202499] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:25:30.341 [2024-12-05 20:45:23.202544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.341 [2024-12-05 20:45:23.278961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:30.341 [2024-12-05 20:45:23.317852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.341 [2024-12-05 20:45:23.317890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:30.341 [2024-12-05 20:45:23.317896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.341 [2024-12-05 20:45:23.317901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.341 [2024-12-05 20:45:23.317906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.341 [2024-12-05 20:45:23.319347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.341 [2024-12-05 20:45:23.319459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.341 [2024-12-05 20:45:23.319571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.341 [2024-12-05 20:45:23.319572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.600 20:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.600 20:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:30.600 20:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:30.859 [2024-12-05 20:45:24.173818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.859 20:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:30.859 20:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.859 20:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.859 20:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:31.117 Malloc1 00:25:31.117 20:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:31.376 20:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:31.376 20:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.634 [2024-12-05 20:45:24.963379] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.634 20:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:31.892 20:45:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:31.892 20:45:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:32.150 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:32.150 fio-3.35 00:25:32.150 Starting 1 thread 00:25:34.699 00:25:34.699 test: (groupid=0, jobs=1): err= 0: pid=463747: Thu Dec 5 20:45:27 2024 00:25:34.699 read: IOPS=13.0k, BW=50.8MiB/s (53.2MB/s)(102MiB/2005msec) 00:25:34.699 slat (nsec): min=1418, max=240565, avg=1562.28, stdev=2115.83 00:25:34.699 clat (usec): min=2634, max=9175, avg=5418.91, stdev=396.62 00:25:34.699 lat (usec): min=2660, max=9177, avg=5420.47, stdev=396.46 00:25:34.699 clat percentiles (usec): 00:25:34.699 | 1.00th=[ 4490], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5080], 00:25:34.699 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5538], 00:25:34.699 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5932], 95.00th=[ 5997], 00:25:34.699 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 7504], 99.95th=[ 7963], 00:25:34.699 | 99.99th=[ 9110] 00:25:34.699 bw ( KiB/s): min=50968, max=52600, per=100.00%, avg=52014.00, stdev=719.39, samples=4 00:25:34.699 iops : min=12742, max=13150, avg=13003.50, stdev=179.85, samples=4 00:25:34.699 write: IOPS=13.0k, BW=50.8MiB/s (53.2MB/s)(102MiB/2005msec); 0 zone resets 00:25:34.699 slat (nsec): min=1455, max=153447, avg=1612.04, stdev=1157.12 00:25:34.699 clat (usec): min=2191, max=8612, avg=4367.38, stdev=332.08 00:25:34.699 lat (usec): min=2207, max=8613, avg=4368.99, stdev=331.98 00:25:34.699 clat percentiles (usec): 00:25:34.699 | 1.00th=[ 3621], 5.00th=[ 3851], 10.00th=[ 3982], 20.00th=[ 4113], 00:25:34.699 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 
00:25:34.699 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4883], 00:25:34.699 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 6783], 99.95th=[ 7767], 00:25:34.699 | 99.99th=[ 8029] 00:25:34.699 bw ( KiB/s): min=51264, max=52480, per=99.99%, avg=51984.00, stdev=523.54, samples=4 00:25:34.699 iops : min=12816, max=13120, avg=12996.00, stdev=130.88, samples=4 00:25:34.699 lat (msec) : 4=5.81%, 10=94.19% 00:25:34.699 cpu : usr=74.30%, sys=24.75%, ctx=83, majf=0, minf=2 00:25:34.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:34.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:34.699 issued rwts: total=26066,26060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:34.699 00:25:34.699 Run status group 0 (all jobs): 00:25:34.699 READ: bw=50.8MiB/s (53.2MB/s), 50.8MiB/s-50.8MiB/s (53.2MB/s-53.2MB/s), io=102MiB (107MB), run=2005-2005msec 00:25:34.699 WRITE: bw=50.8MiB/s (53.2MB/s), 50.8MiB/s-50.8MiB/s (53.2MB/s-53.2MB/s), io=102MiB (107MB), run=2005-2005msec 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:34.699 20:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:34.699 20:45:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:34.699 20:45:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:34.699 20:45:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:34.699 20:45:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:34.699 20:45:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:34.699 20:45:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:34.699 20:45:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:34.699 20:45:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:25:34.699 20:45:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:34.699 20:45:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:34.958 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:34.958 fio-3.35 00:25:34.958 Starting 1 thread 00:25:37.493 00:25:37.493 test: (groupid=0, jobs=1): err= 0: pid=464291: Thu Dec 5 20:45:30 2024 00:25:37.493 read: IOPS=12.0k, BW=188MiB/s (197MB/s)(376MiB/2005msec) 00:25:37.493 slat (nsec): min=2276, max=76566, avg=2585.84, stdev=1156.41 00:25:37.493 clat (usec): min=1508, max=11829, avg=6146.26, stdev=1404.15 00:25:37.493 lat (usec): min=1510, max=11831, avg=6148.84, stdev=1404.23 00:25:37.493 clat percentiles (usec): 00:25:37.493 | 1.00th=[ 3425], 5.00th=[ 4015], 10.00th=[ 4424], 20.00th=[ 4883], 00:25:37.493 | 30.00th=[ 5276], 40.00th=[ 5669], 50.00th=[ 6128], 60.00th=[ 6587], 00:25:37.493 | 70.00th=[ 6915], 80.00th=[ 7308], 90.00th=[ 7767], 95.00th=[ 8356], 00:25:37.493 | 99.00th=[10159], 99.50th=[10683], 99.90th=[11469], 99.95th=[11600], 00:25:37.493 | 99.99th=[11731] 00:25:37.493 bw ( KiB/s): min=93120, max=100160, per=50.15%, avg=96408.00, stdev=2897.18, samples=4 00:25:37.493 iops : min= 5820, max= 6260, avg=6025.50, stdev=181.07, samples=4 00:25:37.493 write: IOPS=7150, BW=112MiB/s (117MB/s)(196MiB/1756msec); 0 zone resets 00:25:37.493 slat (usec): min=26, max=254, avg=28.94, stdev= 5.02 00:25:37.493 clat (usec): min=3671, max=13526, avg=7799.68, stdev=1271.99 00:25:37.493 lat (usec): min=3699, max=13553, avg=7828.62, stdev=1272.64 00:25:37.493 clat percentiles (usec): 00:25:37.493 | 1.00th=[ 5342], 5.00th=[ 5997], 10.00th=[ 6325], 
20.00th=[ 6718], 00:25:37.493 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 7963], 00:25:37.493 | 70.00th=[ 8356], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10159], 00:25:37.493 | 99.00th=[11076], 99.50th=[11469], 99.90th=[13042], 99.95th=[13173], 00:25:37.493 | 99.99th=[13435] 00:25:37.493 bw ( KiB/s): min=97120, max=104640, per=87.80%, avg=100456.00, stdev=3119.78, samples=4 00:25:37.493 iops : min= 6070, max= 6540, avg=6278.50, stdev=194.99, samples=4 00:25:37.493 lat (msec) : 2=0.06%, 4=3.08%, 10=93.88%, 20=2.98% 00:25:37.493 cpu : usr=83.93%, sys=14.47%, ctx=107, majf=0, minf=2 00:25:37.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:37.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.493 issued rwts: total=24089,12557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.493 00:25:37.493 Run status group 0 (all jobs): 00:25:37.493 READ: bw=188MiB/s (197MB/s), 188MiB/s-188MiB/s (197MB/s-197MB/s), io=376MiB (395MB), run=2005-2005msec 00:25:37.493 WRITE: bw=112MiB/s (117MB/s), 112MiB/s-112MiB/s (117MB/s-117MB/s), io=196MiB (206MB), run=1756-1756msec 00:25:37.493 20:45:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:37.752 20:45:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:37.752 20:45:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:37.752 20:45:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:37.752 20:45:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:37.752 20:45:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:25:37.752 20:45:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:37.752 20:45:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:37.752 20:45:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:37.752 20:45:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:37.752 20:45:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:37.752 rmmod nvme_tcp 00:25:37.752 rmmod nvme_fabrics 00:25:37.752 rmmod nvme_keyring 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 463097 ']' 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 463097 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 463097 ']' 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 463097 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463097 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463097' 
00:25:37.752 killing process with pid 463097 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 463097 00:25:37.752 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 463097 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.011 20:45:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.915 20:45:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:39.915 00:25:39.915 real 0m16.382s 00:25:39.915 user 0m53.819s 00:25:39.915 sys 0m6.575s 00:25:39.915 20:45:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:39.915 20:45:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.915 ************************************ 
00:25:39.915 END TEST nvmf_fio_host 00:25:39.915 ************************************ 00:25:40.174 20:45:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:40.174 20:45:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:40.174 20:45:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:40.174 20:45:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.174 ************************************ 00:25:40.174 START TEST nvmf_failover 00:25:40.174 ************************************ 00:25:40.174 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:40.174 * Looking for test storage... 00:25:40.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.174 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:40.174 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:25:40.174 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:40.174 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:40.174 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.175 20:45:33 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:40.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.175 --rc genhtml_branch_coverage=1 00:25:40.175 --rc genhtml_function_coverage=1 00:25:40.175 --rc genhtml_legend=1 00:25:40.175 --rc geninfo_all_blocks=1 00:25:40.175 --rc geninfo_unexecuted_blocks=1 00:25:40.175 00:25:40.175 ' 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:40.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.175 --rc genhtml_branch_coverage=1 00:25:40.175 --rc genhtml_function_coverage=1 00:25:40.175 --rc genhtml_legend=1 00:25:40.175 --rc geninfo_all_blocks=1 00:25:40.175 --rc geninfo_unexecuted_blocks=1 00:25:40.175 00:25:40.175 ' 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:40.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.175 --rc genhtml_branch_coverage=1 00:25:40.175 --rc genhtml_function_coverage=1 00:25:40.175 --rc genhtml_legend=1 00:25:40.175 --rc geninfo_all_blocks=1 00:25:40.175 --rc geninfo_unexecuted_blocks=1 00:25:40.175 00:25:40.175 ' 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:40.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.175 --rc genhtml_branch_coverage=1 00:25:40.175 --rc genhtml_function_coverage=1 00:25:40.175 --rc genhtml_legend=1 00:25:40.175 --rc 
geninfo_all_blocks=1 00:25:40.175 --rc geninfo_unexecuted_blocks=1 00:25:40.175 00:25:40.175 ' 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:40.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:40.175 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:40.176 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:40.176 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:40.176 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:40.176 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:40.176 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:40.176 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:40.176 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:40.176 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:40.176 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.176 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:40.435 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:40.435 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:40.435 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.435 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.435 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.435 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:40.435 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:40.435 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:40.435 20:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.003 20:45:39 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:47.003 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:47.003 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:47.003 Found net devices under 0000:af:00.0: cvl_0_0 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:47.003 Found net devices under 0000:af:00.1: cvl_0_1 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:47.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:25:47.003 00:25:47.003 --- 10.0.0.2 ping statistics --- 00:25:47.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.003 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:47.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:25:47.003 00:25:47.003 --- 10.0.0.1 ping statistics --- 00:25:47.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.003 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=468423 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 468423 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 468423 ']' 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.003 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.004 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.004 20:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:47.004 [2024-12-05 20:45:39.612803] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:25:47.004 [2024-12-05 20:45:39.612850] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.004 [2024-12-05 20:45:39.692385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:47.004 [2024-12-05 20:45:39.730045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.004 [2024-12-05 20:45:39.730084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.004 [2024-12-05 20:45:39.730090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.004 [2024-12-05 20:45:39.730096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:47.004 [2024-12-05 20:45:39.730101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.004 [2024-12-05 20:45:39.731462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.004 [2024-12-05 20:45:39.731550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.004 [2024-12-05 20:45:39.731551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.004 20:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.004 20:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:47.004 20:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:47.004 20:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:47.004 20:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:47.261 20:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.261 20:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:47.261 [2024-12-05 20:45:40.621572] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.261 20:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:47.520 Malloc0 00:25:47.520 20:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:47.778 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:48.036 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.036 [2024-12-05 20:45:41.393654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.036 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:48.294 [2024-12-05 20:45:41.574129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:48.294 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:48.553 [2024-12-05 20:45:41.746685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:48.554 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=468721 00:25:48.554 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:48.554 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:48.554 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 468721 /var/tmp/bdevperf.sock 00:25:48.554 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 468721 ']' 00:25:48.554 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:48.554 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:48.554 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:48.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:48.554 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:48.554 20:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:48.812 20:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:48.812 20:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:48.812 20:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:49.070 NVMe0n1 00:25:49.070 20:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:49.329 00:25:49.329 20:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=468982 00:25:49.329 20:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:49.329 20:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:25:50.286 20:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.546 [2024-12-05 20:45:43.885338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1840be0 is same with the state(6) to be set 00:25:50.547 20:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:53.831 20:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:53.831 00:25:53.831 20:45:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:54.092 [2024-12-05 20:45:47.385464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1841970 is same with the state(6) to be set 00:25:54.092 20:45:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:57.383 20:45:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.383 [2024-12-05 20:45:50.590376] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.383 20:45:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:58.319 20:45:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:58.579 [2024-12-05
20:45:51.805883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198dab0 is same with the state(6) to be set 00:25:58.579 20:45:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 468982 00:26:05.167 { 00:26:05.167 "results": [ 00:26:05.167 { 00:26:05.167 "job": "NVMe0n1", 00:26:05.167 "core_mask": "0x1", 00:26:05.167 "workload": "verify", 00:26:05.167 "status": "finished", 00:26:05.167 "verify_range": { 00:26:05.167 "start": 0, 00:26:05.167 "length": 16384 00:26:05.167 }, 00:26:05.167 "queue_depth": 128, 00:26:05.167 "io_size": 4096, 00:26:05.167 "runtime": 15.002387, 00:26:05.167 "iops": 12260.182329651941, 00:26:05.167 "mibps": 47.891337225202896, 00:26:05.167 "io_failed": 6132, 00:26:05.167 "io_timeout": 0, 00:26:05.167 "avg_latency_us": 10083.597805141235, 00:26:05.167 "min_latency_us": 400.2909090909091, 00:26:05.167 "max_latency_us": 28359.214545454546 00:26:05.167 } 00:26:05.167 ], 00:26:05.167 "core_count": 1 00:26:05.167 } 00:26:05.167 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 468721 00:26:05.167 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 468721 ']' 00:26:05.167 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 468721 00:26:05.167 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:05.167 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux
']' 00:26:05.167 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 468721 00:26:05.167 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:05.167 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:05.167 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 468721' 00:26:05.167 killing process with pid 468721 00:26:05.167 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 468721 00:26:05.167 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 468721 00:26:05.167 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:05.167 [2024-12-05 20:45:41.807041] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:26:05.167 [2024-12-05 20:45:41.807105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468721 ] 00:26:05.167 [2024-12-05 20:45:41.882597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.167 [2024-12-05 20:45:41.921032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.167 Running I/O for 15 seconds... 
00:26:05.167 12400.00 IOPS, 48.44 MiB/s [2024-12-05T19:45:58.608Z] [2024-12-05 20:45:43.887259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:05.167 [2024-12-05 20:45:43.887372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.167 [2024-12-05 20:45:43.887483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.167 [2024-12-05 20:45:43.887491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:05.168 [2024-12-05 20:45:43.887604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.168 [2024-12-05 20:45:43.887620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.168 [2024-12-05 20:45:43.887635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:05.168 [2024-12-05 20:45:43.887847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.168 [2024-12-05 20:45:43.887867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.168 [2024-12-05 20:45:43.887873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.887880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.887886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.887893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.887899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.887906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.887913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.887920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.887926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.887933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.887939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.887946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.887952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.887959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.887971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.887979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.887984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.887992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.887997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:05.169 [2024-12-05 20:45:43.888081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.169 [2024-12-05 20:45:43.888174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.169 [2024-12-05 20:45:43.888188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.169 [2024-12-05 20:45:43.888201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.169 [2024-12-05 20:45:43.888213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.169 [2024-12-05 20:45:43.888227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.169 [2024-12-05 20:45:43.888241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.169 [2024-12-05 20:45:43.888253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.169 [2024-12-05 20:45:43.888261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:05.170 [2024-12-05 20:45:43.888308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.170 [2024-12-05 20:45:43.888445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.170 [2024-12-05 20:45:43.888451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-05 20:45:43.888459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-05 20:45:43.888466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION completion pairs repeated for lba:108704 through lba:109024 (len:8 each) on qid:1 ...]
[2024-12-05 20:45:43.889025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-12-05 20:45:43.889030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-05 20:45:43.889035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109032 len:8 PRP1 0x0 PRP2 0x0
[2024-12-05 20:45:43.889043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-05 20:45:43.889091] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... four ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0 through cid:3) each printed and completed ABORTED - SQ DELETION ...]
[2024-12-05 20:45:43.889170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-12-05 20:45:43.889195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eb480 (9): Bad file descriptor
[2024-12-05 20:45:43.891741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-12-05 20:45:43.922953] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
12082.50 IOPS, 47.20 MiB/s [2024-12-05T19:45:58.613Z] 12206.67 IOPS, 47.68 MiB/s [2024-12-05T19:45:58.613Z] 12270.75 IOPS, 47.93 MiB/s [2024-12-05T19:45:58.613Z]
[2024-12-05 20:45:47.386857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-05 20:45:47.386891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE (lba:71960 through lba:72448) and READ (lba:71672, lba:71680) command / ABORTED - SQ DELETION completion pairs repeated on qid:1 ...]
[2024-12-05 20:45:47.387771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-05 20:45:47.387777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72456 len:8 PRP1 0x0 PRP2 0x0
[2024-12-05 20:45:47.387783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-05 20:45:47.387817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-05 20:45:47.387824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-05 20:45:47.387831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-05 20:45:47.387837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.175 [2024-12-05 20:45:47.387844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.175 [2024-12-05 20:45:47.387850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.175 [2024-12-05 20:45:47.387856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.175 [2024-12-05 20:45:47.387862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.175 [2024-12-05 20:45:47.387868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb480 is same with the state(6) to be set 00:26:05.175 [2024-12-05 20:45:47.387975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.175 [2024-12-05 20:45:47.387981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.175 [2024-12-05 20:45:47.387986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72464 len:8 PRP1 0x0 PRP2 0x0 00:26:05.175 [2024-12-05 20:45:47.387992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.175 [2024-12-05 20:45:47.387999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.175 [2024-12-05 20:45:47.388004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.175 [2024-12-05 20:45:47.388009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72472 len:8 
PRP1 0x0 PRP2 0x0 00:26:05.175 [2024-12-05 20:45:47.388015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.175 [2024-12-05 20:45:47.388021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.175 [2024-12-05 20:45:47.388026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.175 [2024-12-05 20:45:47.388030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72480 len:8 PRP1 0x0 PRP2 0x0 00:26:05.175 [2024-12-05 20:45:47.388036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.175 [2024-12-05 20:45:47.388041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.175 [2024-12-05 20:45:47.388046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.175 [2024-12-05 20:45:47.388051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72488 len:8 PRP1 0x0 PRP2 0x0 00:26:05.175 [2024-12-05 20:45:47.388056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.175 [2024-12-05 20:45:47.388069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.175 [2024-12-05 20:45:47.388074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.175 [2024-12-05 20:45:47.388079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72496 len:8 PRP1 0x0 PRP2 0x0 00:26:05.175 [2024-12-05 20:45:47.388085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.175 [2024-12-05 20:45:47.388091] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.175 [2024-12-05 20:45:47.388096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.175 [2024-12-05 20:45:47.388101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72504 len:8 PRP1 0x0 PRP2 0x0 00:26:05.175 [2024-12-05 20:45:47.388106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.175 [2024-12-05 20:45:47.388112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.175 [2024-12-05 20:45:47.388117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.175 [2024-12-05 20:45:47.388121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72512 len:8 PRP1 0x0 PRP2 0x0 00:26:05.175 [2024-12-05 20:45:47.388129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.175 [2024-12-05 20:45:47.388135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.175 [2024-12-05 20:45:47.388139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.175 [2024-12-05 20:45:47.388144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72520 len:8 PRP1 0x0 PRP2 0x0 00:26:05.175 [2024-12-05 20:45:47.388150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.175 [2024-12-05 20:45:47.388156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388165] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72528 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72536 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72544 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72552 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72560 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72568 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72576 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388309] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72584 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72592 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72600 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72608 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 
[2024-12-05 20:45:47.388383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72616 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72624 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72632 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72640 len:8 PRP1 0x0 PRP2 0x0 00:26:05.176 [2024-12-05 20:45:47.388469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.176 [2024-12-05 20:45:47.388475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.176 [2024-12-05 20:45:47.388479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.176 [2024-12-05 20:45:47.388484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72648 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72656 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72664 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72672 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72680 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71688 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71696 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71704 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71712 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71720 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71728 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71736 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71744 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71752 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71760 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 [2024-12-05 20:45:47.388799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.388804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71768 len:8 PRP1 0x0 PRP2 0x0 00:26:05.177 [2024-12-05 20:45:47.388810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.177 [2024-12-05 20:45:47.388816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.177 
[2024-12-05 20:45:47.399644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.177 [2024-12-05 20:45:47.399657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71776 len:8 PRP1 0x0 PRP2 0x0 00:26:05.178 [2024-12-05 20:45:47.399667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.178 [2024-12-05 20:45:47.399676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.178 [2024-12-05 20:45:47.399682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.178 [2024-12-05 20:45:47.399689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71784 len:8 PRP1 0x0 PRP2 0x0 00:26:05.178 [2024-12-05 20:45:47.399696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.178 [2024-12-05 20:45:47.399705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.178 [2024-12-05 20:45:47.399711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.178 [2024-12-05 20:45:47.399718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71792 len:8 PRP1 0x0 PRP2 0x0 00:26:05.178 [2024-12-05 20:45:47.399726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.178 [2024-12-05 20:45:47.399735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.178 [2024-12-05 20:45:47.399741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.178 [2024-12-05 20:45:47.399748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:71800 len:8 PRP1 0x0 PRP2 0x0 00:26:05.178 [2024-12-05 20:45:47.399758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.178 [2024-12-05 20:45:47.399766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.178 [2024-12-05 20:45:47.399772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.178 [2024-12-05 20:45:47.399779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72688 len:8 PRP1 0x0 PRP2 0x0 00:26:05.178 [2024-12-05 20:45:47.399786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.178 [2024-12-05 20:45:47.399795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.178 [2024-12-05 20:45:47.399801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.178 [2024-12-05 20:45:47.399807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71808 len:8 PRP1 0x0 PRP2 0x0 00:26:05.178 [2024-12-05 20:45:47.399816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.178 [2024-12-05 20:45:47.399824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.178 [2024-12-05 20:45:47.399830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.178 [2024-12-05 20:45:47.399837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71816 len:8 PRP1 0x0 PRP2 0x0 00:26:05.178 [2024-12-05 20:45:47.399844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.178 [2024-12-05 20:45:47.399852] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.178 [2024-12-05 20:45:47.399858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.178 [2024-12-05 20:45:47.399865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71824 len:8 PRP1 0x0 PRP2 0x0 00:26:05.178 [2024-12-05 20:45:47.399872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.178
[... same abort/manual-completion cycle repeated for every remaining queued I/O on qid:1 between 20:45:47.399880 and 20:45:47.408594: READ lba:71832 through lba:71944 and WRITE lba:71952 through lba:72360 (len:8 each, step 8), each command manually completed with ABORTED - SQ DELETION (00/08) ...]
[2024-12-05 20:45:47.408602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.182 [2024-12-05 20:45:47.408608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.182 [2024-12-05 
20:45:47.408615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72368 len:8 PRP1 0x0 PRP2 0x0 00:26:05.182 [2024-12-05 20:45:47.408623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.182 [2024-12-05 20:45:47.408631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.182 [2024-12-05 20:45:47.408637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.182 [2024-12-05 20:45:47.408643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72376 len:8 PRP1 0x0 PRP2 0x0 00:26:05.182 [2024-12-05 20:45:47.408651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.182 [2024-12-05 20:45:47.408659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.182 [2024-12-05 20:45:47.408665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.182 [2024-12-05 20:45:47.408672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72384 len:8 PRP1 0x0 PRP2 0x0 00:26:05.182 [2024-12-05 20:45:47.408680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.182 [2024-12-05 20:45:47.408688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.183 [2024-12-05 20:45:47.408694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.183 [2024-12-05 20:45:47.408700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72392 len:8 PRP1 0x0 PRP2 0x0 00:26:05.183 [2024-12-05 20:45:47.408708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.183 [2024-12-05 20:45:47.408716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.183 [2024-12-05 20:45:47.408722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.183 [2024-12-05 20:45:47.408728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72400 len:8 PRP1 0x0 PRP2 0x0 00:26:05.183 [2024-12-05 20:45:47.408736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.183 [2024-12-05 20:45:47.408744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.183 [2024-12-05 20:45:47.408751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.183 [2024-12-05 20:45:47.408757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72408 len:8 PRP1 0x0 PRP2 0x0 00:26:05.183 [2024-12-05 20:45:47.408766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.183 [2024-12-05 20:45:47.408774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.183 [2024-12-05 20:45:47.408780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.183 [2024-12-05 20:45:47.408787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72416 len:8 PRP1 0x0 PRP2 0x0 00:26:05.183 [2024-12-05 20:45:47.408794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.183 [2024-12-05 20:45:47.408802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.183 [2024-12-05 20:45:47.408808] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.183 [2024-12-05 20:45:47.408815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72424 len:8 PRP1 0x0 PRP2 0x0 00:26:05.183 [2024-12-05 20:45:47.408822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.183 [2024-12-05 20:45:47.408830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.183 [2024-12-05 20:45:47.408837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.183 [2024-12-05 20:45:47.408843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72432 len:8 PRP1 0x0 PRP2 0x0 00:26:05.183 [2024-12-05 20:45:47.408851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.183 [2024-12-05 20:45:47.408859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.183 [2024-12-05 20:45:47.408865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.183 [2024-12-05 20:45:47.408872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72440 len:8 PRP1 0x0 PRP2 0x0 00:26:05.183 [2024-12-05 20:45:47.408879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.183 [2024-12-05 20:45:47.408887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.183 [2024-12-05 20:45:47.408893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.183 [2024-12-05 20:45:47.408900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71672 len:8 PRP1 0x0 PRP2 0x0 00:26:05.183 
[2024-12-05 20:45:47.408907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.183 [2024-12-05 20:45:47.408916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.183 [2024-12-05 20:45:47.408922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.183 [2024-12-05 20:45:47.408929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71680 len:8 PRP1 0x0 PRP2 0x0 00:26:05.183 [2024-12-05 20:45:47.408937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.183 [2024-12-05 20:45:47.408944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.183 [2024-12-05 20:45:47.408950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.183 [2024-12-05 20:45:47.408957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72448 len:8 PRP1 0x0 PRP2 0x0 00:26:05.183 [2024-12-05 20:45:47.408965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.183 [2024-12-05 20:45:47.408973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.183 [2024-12-05 20:45:47.408979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.183 [2024-12-05 20:45:47.408987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72456 len:8 PRP1 0x0 PRP2 0x0 00:26:05.183 [2024-12-05 20:45:47.408995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.183 [2024-12-05 20:45:47.409040] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[2024-12-05 20:45:47.409052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
[2024-12-05 20:45:47.409095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eb480 (9): Bad file descriptor
[2024-12-05 20:45:47.413758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
[2024-12-05 20:45:47.475446] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
12093.60 IOPS, 47.24 MiB/s [2024-12-05T19:45:58.624Z] 12128.33 IOPS, 47.38 MiB/s [2024-12-05T19:45:58.624Z] 12172.29 IOPS, 47.55 MiB/s [2024-12-05T19:45:58.624Z] 12193.38 IOPS, 47.63 MiB/s [2024-12-05T19:45:58.624Z]
[2024-12-05 20:45:51.807132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-05 20:45:51.807168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command / ABORTED - SQ DELETION pair repeated for WRITE lba:1232-1784 (step 8, varying cids), all sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, through 20:45:51.808121 ...]
[2024-12-05 20:45:51.808128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.186 [2024-12-05 
20:45:51.808134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.186 [2024-12-05 20:45:51.808141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.186 [2024-12-05 20:45:51.808148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.186 [2024-12-05 20:45:51.808156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.186 [2024-12-05 20:45:51.808161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.186 [2024-12-05 20:45:51.808168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.186 [2024-12-05 20:45:51.808174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.186 [2024-12-05 20:45:51.808181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.186 [2024-12-05 20:45:51.808187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.186 [2024-12-05 20:45:51.808194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.186 [2024-12-05 20:45:51.808200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.186 [2024-12-05 20:45:51.808207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:79 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.186 [2024-12-05 20:45:51.808212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.186 [2024-12-05 20:45:51.808219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.186 [2024-12-05 20:45:51.808225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.186 [2024-12-05 20:45:51.808232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.186 [2024-12-05 20:45:51.808238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.186 [2024-12-05 20:45:51.808245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.186 [2024-12-05 20:45:51.808254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.186 [2024-12-05 20:45:51.808262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.186 [2024-12-05 20:45:51.808268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.186 [2024-12-05 20:45:51.808275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.186 [2024-12-05 20:45:51.808281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:05.186 [2024-12-05 20:45:51.808288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.186 [2024-12-05 20:45:51.808294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.187 [2024-12-05 20:45:51.808307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.187 [2024-12-05 20:45:51.808321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.187 [2024-12-05 20:45:51.808334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1920 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 
20:45:51.808510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.187 [2024-12-05 20:45:51.808625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.187 [2024-12-05 20:45:51.808631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.188 [2024-12-05 20:45:51.808646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:05.188 [2024-12-05 20:45:51.808660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.188 [2024-12-05 20:45:51.808674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.188 [2024-12-05 20:45:51.808687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.188 [2024-12-05 20:45:51.808700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.188 [2024-12-05 20:45:51.808713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.188 [2024-12-05 20:45:51.808726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.188 [2024-12-05 20:45:51.808739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.188 [2024-12-05 20:45:51.808753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.188 [2024-12-05 20:45:51.808779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2120 len:8 PRP1 0x0 PRP2 0x0 00:26:05.188 [2024-12-05 20:45:51.808785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.188 [2024-12-05 20:45:51.808798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.188 [2024-12-05 20:45:51.808803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2128 len:8 PRP1 0x0 PRP2 0x0 00:26:05.188 [2024-12-05 20:45:51.808809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.188 [2024-12-05 20:45:51.808819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.188 [2024-12-05 20:45:51.808824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2136 len:8 PRP1 0x0 PRP2 0x0 00:26:05.188 [2024-12-05 20:45:51.808831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.188 [2024-12-05 20:45:51.808842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.188 [2024-12-05 20:45:51.808847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:8 PRP1 0x0 PRP2 0x0 00:26:05.188 [2024-12-05 20:45:51.808852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.188 [2024-12-05 20:45:51.808864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.188 [2024-12-05 20:45:51.808869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2152 len:8 PRP1 0x0 PRP2 0x0 00:26:05.188 [2024-12-05 20:45:51.808874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.188 [2024-12-05 20:45:51.808885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.188 [2024-12-05 20:45:51.808890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2160 len:8 PRP1 0x0 PRP2 0x0 00:26:05.188 [2024-12-05 20:45:51.808896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.188 [2024-12-05 20:45:51.808906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.188 [2024-12-05 20:45:51.808911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2168 len:8 PRP1 0x0 PRP2 0x0 00:26:05.188 [2024-12-05 20:45:51.808916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.188 [2024-12-05 20:45:51.808926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.188 [2024-12-05 20:45:51.808931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:8 PRP1 0x0 PRP2 0x0 00:26:05.188 [2024-12-05 20:45:51.808937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.188 [2024-12-05 20:45:51.808947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.188 [2024-12-05 20:45:51.808952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2184 len:8 PRP1 0x0 PRP2 0x0 00:26:05.188 [2024-12-05 20:45:51.808957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.808998] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:05.188 [2024-12-05 20:45:51.809018] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.188 [2024-12-05 20:45:51.809025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.188 [2024-12-05 20:45:51.809033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.188 [2024-12-05 20:45:51.809039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.189 [2024-12-05 20:45:51.809047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.189 [2024-12-05 20:45:51.809052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.189 [2024-12-05 20:45:51.809064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.189 [2024-12-05 20:45:51.809070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.189 [2024-12-05 20:45:51.809076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:26:05.189 [2024-12-05 20:45:51.819279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eb480 (9): Bad file descriptor 00:26:05.189 [2024-12-05 20:45:51.822757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:05.189 12204.00 IOPS, 47.67 MiB/s [2024-12-05T19:45:58.630Z] [2024-12-05 20:45:51.845128] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:26:05.189 12203.50 IOPS, 47.67 MiB/s [2024-12-05T19:45:58.630Z] 12220.91 IOPS, 47.74 MiB/s [2024-12-05T19:45:58.630Z] 12242.25 IOPS, 47.82 MiB/s [2024-12-05T19:45:58.630Z] 12245.15 IOPS, 47.83 MiB/s [2024-12-05T19:45:58.630Z] 12242.43 IOPS, 47.82 MiB/s 00:26:05.189 Latency(us) 00:26:05.189 [2024-12-05T19:45:58.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.189 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:05.189 Verification LBA range: start 0x0 length 0x4000 00:26:05.189 NVMe0n1 : 15.00 12260.18 47.89 408.73 0.00 10083.60 400.29 28359.21 00:26:05.189 [2024-12-05T19:45:58.630Z] =================================================================================================================== 00:26:05.189 [2024-12-05T19:45:58.630Z] Total : 12260.18 47.89 408.73 0.00 10083.60 400.29 28359.21 00:26:05.189 Received shutdown signal, test time was about 15.000000 seconds 00:26:05.189 00:26:05.189 Latency(us) 00:26:05.189 [2024-12-05T19:45:58.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.189 [2024-12-05T19:45:58.630Z] =================================================================================================================== 00:26:05.189 [2024-12-05T19:45:58.630Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=471609 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 471609 /var/tmp/bdevperf.sock 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 471609 ']' 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:05.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:05.189 [2024-12-05 20:45:58.485653] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:05.189 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:05.449 [2024-12-05 20:45:58.678176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:05.449 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:05.707 NVMe0n1 00:26:05.707 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:05.966 00:26:05.966 20:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:06.224 00:26:06.225 20:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:06.225 20:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:06.225 20:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:06.483 20:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:09.769 20:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:09.769 20:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:09.769 20:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=472542 00:26:09.769 20:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:09.769 20:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 472542 00:26:10.706 { 00:26:10.706 "results": [ 00:26:10.706 { 00:26:10.706 "job": "NVMe0n1", 00:26:10.706 "core_mask": "0x1", 00:26:10.706 "workload": "verify", 00:26:10.706 "status": "finished", 00:26:10.706 "verify_range": { 00:26:10.706 "start": 0, 00:26:10.706 "length": 16384 00:26:10.706 }, 00:26:10.706 "queue_depth": 128, 00:26:10.706 "io_size": 4096, 00:26:10.706 "runtime": 1.010543, 00:26:10.706 "iops": 12406.201418445331, 00:26:10.706 "mibps": 48.461724290802074, 00:26:10.706 "io_failed": 0, 00:26:10.706 "io_timeout": 0, 00:26:10.706 "avg_latency_us": 
10279.56968797813, 00:26:10.706 "min_latency_us": 2189.498181818182, 00:26:10.706 "max_latency_us": 9532.50909090909 00:26:10.706 } 00:26:10.706 ], 00:26:10.706 "core_count": 1 00:26:10.706 } 00:26:10.706 20:46:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:10.706 [2024-12-05 20:45:58.120422] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:26:10.706 [2024-12-05 20:45:58.120466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471609 ] 00:26:10.706 [2024-12-05 20:45:58.190762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.706 [2024-12-05 20:45:58.224968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.706 [2024-12-05 20:45:59.780745] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:10.706 [2024-12-05 20:45:59.780787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.706 [2024-12-05 20:45:59.780797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.706 [2024-12-05 20:45:59.780806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.706 [2024-12-05 20:45:59.780812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.706 [2024-12-05 20:45:59.780819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:26:10.706 [2024-12-05 20:45:59.780825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.706 [2024-12-05 20:45:59.780832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.706 [2024-12-05 20:45:59.780838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.707 [2024-12-05 20:45:59.780844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:26:10.707 [2024-12-05 20:45:59.780867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:10.707 [2024-12-05 20:45:59.780880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a6480 (9): Bad file descriptor 00:26:10.707 [2024-12-05 20:45:59.791199] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:10.707 Running I/O for 1 seconds... 
00:26:10.707 12409.00 IOPS, 48.47 MiB/s 00:26:10.707 Latency(us) 00:26:10.707 [2024-12-05T19:46:04.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.707 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:10.707 Verification LBA range: start 0x0 length 0x4000 00:26:10.707 NVMe0n1 : 1.01 12406.20 48.46 0.00 0.00 10279.57 2189.50 9532.51 00:26:10.707 [2024-12-05T19:46:04.148Z] =================================================================================================================== 00:26:10.707 [2024-12-05T19:46:04.148Z] Total : 12406.20 48.46 0.00 0.00 10279.57 2189.50 9532.51 00:26:10.707 20:46:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:10.707 20:46:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:10.965 20:46:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:11.224 20:46:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:11.224 20:46:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:11.483 20:46:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:11.483 20:46:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:14.771 20:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:14.771 20:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:14.771 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 471609 00:26:14.771 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 471609 ']' 00:26:14.771 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 471609 00:26:14.771 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:14.771 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.771 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 471609 00:26:14.771 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.771 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.771 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 471609' 00:26:14.771 killing process with pid 471609 00:26:14.771 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 471609 00:26:14.771 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 471609 00:26:15.030 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:15.030 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:15.289 rmmod nvme_tcp 00:26:15.289 rmmod nvme_fabrics 00:26:15.289 rmmod nvme_keyring 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 468423 ']' 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 468423 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 468423 ']' 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 468423 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 468423 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 468423' 00:26:15.289 killing process with pid 468423 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 468423 00:26:15.289 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 468423 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.549 20:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.454 20:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:17.454 00:26:17.454 real 0m37.472s 00:26:17.454 user 1m57.940s 00:26:17.454 sys 
0m7.900s 00:26:17.454 20:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:17.454 20:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:17.454 ************************************ 00:26:17.454 END TEST nvmf_failover 00:26:17.454 ************************************ 00:26:17.715 20:46:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:17.715 20:46:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:17.715 20:46:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.715 20:46:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.715 ************************************ 00:26:17.715 START TEST nvmf_host_discovery 00:26:17.715 ************************************ 00:26:17.715 20:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:17.715 * Looking for test storage... 
00:26:17.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.715 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:17.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.715 --rc genhtml_branch_coverage=1 00:26:17.715 --rc genhtml_function_coverage=1 00:26:17.715 --rc 
genhtml_legend=1 00:26:17.715 --rc geninfo_all_blocks=1 00:26:17.716 --rc geninfo_unexecuted_blocks=1 00:26:17.716 00:26:17.716 ' 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:17.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.716 --rc genhtml_branch_coverage=1 00:26:17.716 --rc genhtml_function_coverage=1 00:26:17.716 --rc genhtml_legend=1 00:26:17.716 --rc geninfo_all_blocks=1 00:26:17.716 --rc geninfo_unexecuted_blocks=1 00:26:17.716 00:26:17.716 ' 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:17.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.716 --rc genhtml_branch_coverage=1 00:26:17.716 --rc genhtml_function_coverage=1 00:26:17.716 --rc genhtml_legend=1 00:26:17.716 --rc geninfo_all_blocks=1 00:26:17.716 --rc geninfo_unexecuted_blocks=1 00:26:17.716 00:26:17.716 ' 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:17.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.716 --rc genhtml_branch_coverage=1 00:26:17.716 --rc genhtml_function_coverage=1 00:26:17.716 --rc genhtml_legend=1 00:26:17.716 --rc geninfo_all_blocks=1 00:26:17.716 --rc geninfo_unexecuted_blocks=1 00:26:17.716 00:26:17.716 ' 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.716 20:46:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.716 20:46:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.716 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.975 20:46:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:17.975 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:17.976 20:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:24.544 
20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.544 20:46:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:24.544 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:24.544 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:24.544 Found net devices under 0000:af:00.0: cvl_0_0 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:24.544 Found net devices under 0000:af:00.1: cvl_0_1 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:24.544 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:24.545 20:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:24.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:26:24.545 00:26:24.545 --- 10.0.0.2 ping statistics --- 00:26:24.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.545 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:24.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:26:24.545 00:26:24.545 --- 10.0.0.1 ping statistics --- 00:26:24.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.545 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.545 
20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=477188 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 477188 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 477188 ']' 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.545 20:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.545 [2024-12-05 20:46:17.196900] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:26:24.545 [2024-12-05 20:46:17.196950] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.545 [2024-12-05 20:46:17.275651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.545 [2024-12-05 20:46:17.314193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.545 [2024-12-05 20:46:17.314227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.545 [2024-12-05 20:46:17.314233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.545 [2024-12-05 20:46:17.314239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.545 [2024-12-05 20:46:17.314243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:24.545 [2024-12-05 20:46:17.314772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.805 [2024-12-05 20:46:18.043223] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.805 [2024-12-05 20:46:18.055369] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:24.805 20:46:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.805 null0 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.805 null1 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=477349 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 477349 /tmp/host.sock 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 477349 ']' 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:24.805 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.805 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.805 [2024-12-05 20:46:18.130259] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:26:24.805 [2024-12-05 20:46:18.130302] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477349 ] 00:26:24.805 [2024-12-05 20:46:18.201503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.805 [2024-12-05 20:46:18.243246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.064 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.064 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:25.064 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:25.064 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:25.064 20:46:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.064 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.064 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:25.065 20:46:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:25.324 
20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.324 [2024-12-05 20:46:18.660951] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.324 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:25.583 20:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:26.151 [2024-12-05 20:46:19.401568] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:26.151 [2024-12-05 20:46:19.401584] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:26.151 [2024-12-05 20:46:19.401595] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:26.151 [2024-12-05 20:46:19.527968] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:26.409 [2024-12-05 20:46:19.622653] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:26.409 [2024-12-05 20:46:19.623453] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x11ccde0:1 started. 00:26:26.409 [2024-12-05 20:46:19.624733] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:26.409 [2024-12-05 20:46:19.624746] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:26.409 [2024-12-05 20:46:19.629941] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11ccde0 was disconnected and freed. delete nvme_qpair. 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.668 20:46:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:26.668 20:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:26.668 
[2024-12-05 20:46:20.055202] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11cd160:1 started. 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.668 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.669 [2024-12-05 20:46:20.060941] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11cd160 was disconnected and freed. delete nvme_qpair. 
00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.669 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.927 [2024-12-05 20:46:20.153015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:26.927 [2024-12-05 20:46:20.154062] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:26.927 [2024-12-05 20:46:20.154083] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.927 20:46:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.927 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.928 20:46:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.928 [2024-12-05 20:46:20.280439] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:26.928 20:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:27.186 [2024-12-05 20:46:20.509658] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:27.186 [2024-12-05 20:46:20.509698] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:27.186 [2024-12-05 20:46:20.509707] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:26:27.186 [2024-12-05 20:46:20.509712] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.123 [2024-12-05 20:46:21.384854] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:28.123 [2024-12-05 20:46:21.384876] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:28.123 [2024-12-05 20:46:21.387073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.123 [2024-12-05 20:46:21.387091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.123 [2024-12-05 20:46:21.387099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.123 [2024-12-05 20:46:21.387106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.123 [2024-12-05 20:46:21.387114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.123 [2024-12-05 20:46:21.387120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.123 [2024-12-05 20:46:21.387126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.123 [2024-12-05 20:46:21.387131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.123 [2024-12-05 20:46:21.387137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x119ef30 is same with the state(6) to be set 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.123 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:28.123 [2024-12-05 20:46:21.397081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119ef30 (9): Bad file descriptor 00:26:28.123 [2024-12-05 20:46:21.407116] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:26:28.123 [2024-12-05 20:46:21.407127] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:28.123 [2024-12-05 20:46:21.407134] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:28.123 [2024-12-05 20:46:21.407138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:28.124 [2024-12-05 20:46:21.407156] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:28.124 [2024-12-05 20:46:21.407388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.124 [2024-12-05 20:46:21.407401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119ef30 with addr=10.0.0.2, port=4420 00:26:28.124 [2024-12-05 20:46:21.407409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ef30 is same with the state(6) to be set 00:26:28.124 [2024-12-05 20:46:21.407420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119ef30 (9): Bad file descriptor 00:26:28.124 [2024-12-05 20:46:21.407430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:28.124 [2024-12-05 20:46:21.407437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:28.124 [2024-12-05 20:46:21.407445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:28.124 [2024-12-05 20:46:21.407450] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:28.124 [2024-12-05 20:46:21.407455] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:26:28.124 [2024-12-05 20:46:21.407459] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:28.124 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.124 [2024-12-05 20:46:21.417186] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:28.124 [2024-12-05 20:46:21.417197] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:28.124 [2024-12-05 20:46:21.417200] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:28.124 [2024-12-05 20:46:21.417204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:28.124 [2024-12-05 20:46:21.417216] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:28.124 [2024-12-05 20:46:21.417446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.124 [2024-12-05 20:46:21.417457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119ef30 with addr=10.0.0.2, port=4420 00:26:28.124 [2024-12-05 20:46:21.417464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ef30 is same with the state(6) to be set 00:26:28.124 [2024-12-05 20:46:21.417474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119ef30 (9): Bad file descriptor 00:26:28.124 [2024-12-05 20:46:21.417486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:28.124 [2024-12-05 20:46:21.417491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:28.124 [2024-12-05 20:46:21.417497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:28.124 [2024-12-05 20:46:21.417502] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:28.124 [2024-12-05 20:46:21.417506] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:28.124 [2024-12-05 20:46:21.417509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:28.124 [2024-12-05 20:46:21.427247] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:28.124 [2024-12-05 20:46:21.427259] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:28.124 [2024-12-05 20:46:21.427262] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:28.124 [2024-12-05 20:46:21.427266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:28.124 [2024-12-05 20:46:21.427278] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:28.124 [2024-12-05 20:46:21.427515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.124 [2024-12-05 20:46:21.427526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119ef30 with addr=10.0.0.2, port=4420 00:26:28.124 [2024-12-05 20:46:21.427533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ef30 is same with the state(6) to be set 00:26:28.124 [2024-12-05 20:46:21.427541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119ef30 (9): Bad file descriptor 00:26:28.124 [2024-12-05 20:46:21.427550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:28.124 [2024-12-05 20:46:21.427555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:28.124 [2024-12-05 20:46:21.427561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:28.124 [2024-12-05 20:46:21.427566] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:28.124 [2024-12-05 20:46:21.427570] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:28.124 [2024-12-05 20:46:21.427573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:28.124 [2024-12-05 20:46:21.437308] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:28.124 [2024-12-05 20:46:21.437321] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:28.124 [2024-12-05 20:46:21.437324] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:28.124 [2024-12-05 20:46:21.437328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:28.124 [2024-12-05 20:46:21.437339] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:28.124 [2024-12-05 20:46:21.437495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.124 [2024-12-05 20:46:21.437505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119ef30 with addr=10.0.0.2, port=4420 00:26:28.124 [2024-12-05 20:46:21.437511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ef30 is same with the state(6) to be set 00:26:28.124 [2024-12-05 20:46:21.437525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119ef30 (9): Bad file descriptor 00:26:28.124 [2024-12-05 20:46:21.437534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:28.124 [2024-12-05 20:46:21.437539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:28.124 [2024-12-05 20:46:21.437545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:28.124 [2024-12-05 20:46:21.437550] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:28.124 [2024-12-05 20:46:21.437554] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:28.124 [2024-12-05 20:46:21.437557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:28.124 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.124 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.124 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:28.124 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:28.124 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.124 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.124 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:28.124 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:28.124 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:26:28.125 [2024-12-05 20:46:21.447369] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:28.125 [2024-12-05 20:46:21.447380] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:28.125 [2024-12-05 20:46:21.447384] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:28.125 [2024-12-05 20:46:21.447387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:28.125 [2024-12-05 20:46:21.447398] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:28.125 [2024-12-05 20:46:21.447547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.125 [2024-12-05 20:46:21.447558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119ef30 with addr=10.0.0.2, port=4420 00:26:28.125 [2024-12-05 20:46:21.447565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ef30 is same with the state(6) to be set 00:26:28.125 [2024-12-05 20:46:21.447573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119ef30 (9): Bad file descriptor 00:26:28.125 [2024-12-05 20:46:21.447582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:28.125 [2024-12-05 20:46:21.447587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:28.125 [2024-12-05 20:46:21.447597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:28.125 [2024-12-05 20:46:21.447602] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:28.125 [2024-12-05 20:46:21.447606] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:28.125 [2024-12-05 20:46:21.447609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:28.125 [2024-12-05 20:46:21.457428] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:28.125 [2024-12-05 20:46:21.457440] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:28.125 [2024-12-05 20:46:21.457444] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:28.125 [2024-12-05 20:46:21.457448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:28.125 [2024-12-05 20:46:21.457460] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:28.125 [2024-12-05 20:46:21.457612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.125 [2024-12-05 20:46:21.457622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119ef30 with addr=10.0.0.2, port=4420 00:26:28.125 [2024-12-05 20:46:21.457629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ef30 is same with the state(6) to be set 00:26:28.125 [2024-12-05 20:46:21.457638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119ef30 (9): Bad file descriptor 00:26:28.125 [2024-12-05 20:46:21.457647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:28.125 [2024-12-05 20:46:21.457652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:28.125 [2024-12-05 20:46:21.457658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:28.125 [2024-12-05 20:46:21.457664] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:28.125 [2024-12-05 20:46:21.457668] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:28.125 [2024-12-05 20:46:21.457671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:28.125 [2024-12-05 20:46:21.467490] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:28.125 [2024-12-05 20:46:21.467499] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:28.125 [2024-12-05 20:46:21.467503] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:28.125 [2024-12-05 20:46:21.467507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:28.125 [2024-12-05 20:46:21.467518] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:28.125 [2024-12-05 20:46:21.467746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.125 [2024-12-05 20:46:21.467757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119ef30 with addr=10.0.0.2, port=4420 00:26:28.125 [2024-12-05 20:46:21.467764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ef30 is same with the state(6) to be set 00:26:28.125 [2024-12-05 20:46:21.467772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119ef30 (9): Bad file descriptor 00:26:28.125 [2024-12-05 20:46:21.467781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:28.125 [2024-12-05 20:46:21.467789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:28.125 [2024-12-05 20:46:21.467795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:28.125 [2024-12-05 20:46:21.467800] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:28.125 [2024-12-05 20:46:21.467804] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:28.125 [2024-12-05 20:46:21.467807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:28.125 [2024-12-05 20:46:21.472111] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:28.125 [2024-12-05 20:46:21.472125] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:28.125 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.126 
20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.126 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:28.385 20:46:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.385 
20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:28.385 20:46:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.385 20:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.325 [2024-12-05 20:46:22.759288] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:29.325 [2024-12-05 20:46:22.759303] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:29.325 [2024-12-05 20:46:22.759312] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:29.584 [2024-12-05 20:46:22.845565] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:29.843 [2024-12-05 20:46:23.145871] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:29.844 [2024-12-05 20:46:23.146473] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x11d3130:1 started. 00:26:29.844 [2024-12-05 20:46:23.147983] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:29.844 [2024-12-05 20:46:23.148005] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.844 request: 00:26:29.844 { 00:26:29.844 "name": "nvme", 00:26:29.844 "trtype": "tcp", 00:26:29.844 "traddr": "10.0.0.2", 00:26:29.844 "adrfam": "ipv4", 00:26:29.844 "trsvcid": "8009", 00:26:29.844 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:29.844 "wait_for_attach": true, 00:26:29.844 "method": "bdev_nvme_start_discovery", 00:26:29.844 "req_id": 1 00:26:29.844 } 00:26:29.844 Got JSON-RPC error response 00:26:29.844 response: 00:26:29.844 { 00:26:29.844 "code": -17, 00:26:29.844 "message": "File exists" 00:26:29.844 } 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.844 [2024-12-05 20:46:23.189699] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x11d3130 was disconnected and freed. delete nvme_qpair. 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:29.844 20:46:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.844 request: 00:26:29.844 { 00:26:29.844 "name": "nvme_second", 00:26:29.844 "trtype": "tcp", 00:26:29.844 "traddr": "10.0.0.2", 00:26:29.844 "adrfam": "ipv4", 00:26:29.844 "trsvcid": "8009", 00:26:29.844 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:29.844 "wait_for_attach": true, 00:26:29.844 "method": "bdev_nvme_start_discovery", 00:26:29.844 "req_id": 1 00:26:29.844 } 00:26:29.844 Got JSON-RPC error response 00:26:29.844 response: 00:26:29.844 { 00:26:29.844 "code": -17, 00:26:29.844 "message": "File exists" 00:26:29.844 } 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.844 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:30.103 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.103 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.104 20:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.041 [2024-12-05 20:46:24.383423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-12-05 20:46:24.383448] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6b40 with addr=10.0.0.2, port=8010 00:26:31.041 [2024-12-05 20:46:24.383460] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:31.041 [2024-12-05 20:46:24.383466] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:31.041 [2024-12-05 20:46:24.383489] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:31.976 [2024-12-05 20:46:25.385770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.976 [2024-12-05 20:46:25.385792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6b40 with addr=10.0.0.2, port=8010 00:26:31.976 [2024-12-05 20:46:25.385803] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:31.976 [2024-12-05 20:46:25.385808] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:31.976 [2024-12-05 20:46:25.385814] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:33.350 [2024-12-05 20:46:26.388020] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:33.350 request: 00:26:33.350 { 00:26:33.350 "name": "nvme_second", 00:26:33.350 "trtype": "tcp", 00:26:33.350 "traddr": "10.0.0.2", 00:26:33.350 "adrfam": "ipv4", 00:26:33.350 "trsvcid": "8010", 00:26:33.350 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:33.350 "wait_for_attach": false, 00:26:33.350 "attach_timeout_ms": 3000, 00:26:33.350 "method": "bdev_nvme_start_discovery", 00:26:33.350 "req_id": 1 00:26:33.350 } 00:26:33.350 Got JSON-RPC error response 00:26:33.350 response: 00:26:33.350 { 00:26:33.350 "code": -110, 00:26:33.350 "message": "Connection timed out" 00:26:33.350 } 00:26:33.350 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 477349 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:33.351 20:46:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:33.351 rmmod nvme_tcp 00:26:33.351 rmmod nvme_fabrics 00:26:33.351 rmmod nvme_keyring 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 477188 ']' 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 477188 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 477188 ']' 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 477188 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 477188 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 477188' 00:26:33.351 
killing process with pid 477188 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 477188 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 477188 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.351 20:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.888 20:46:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:35.888 00:26:35.888 real 0m17.836s 00:26:35.888 user 0m21.153s 00:26:35.888 sys 0m5.895s 00:26:35.888 20:46:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:35.888 20:46:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:35.888 ************************************ 00:26:35.888 END TEST nvmf_host_discovery 00:26:35.888 ************************************ 00:26:35.888 20:46:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:35.888 20:46:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:35.888 20:46:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:35.888 20:46:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.888 ************************************ 00:26:35.888 START TEST nvmf_host_multipath_status 00:26:35.888 ************************************ 00:26:35.888 20:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:35.888 * Looking for test storage... 
00:26:35.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.888 20:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:35.888 20:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:26:35.888 20:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:35.888 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:35.888 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.888 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.888 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.888 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.888 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.888 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.888 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:35.889 20:46:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.889 20:46:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:35.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.889 --rc genhtml_branch_coverage=1 00:26:35.889 --rc genhtml_function_coverage=1 00:26:35.889 --rc genhtml_legend=1 00:26:35.889 --rc geninfo_all_blocks=1 00:26:35.889 --rc geninfo_unexecuted_blocks=1 00:26:35.889 00:26:35.889 ' 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:35.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.889 --rc genhtml_branch_coverage=1 00:26:35.889 --rc genhtml_function_coverage=1 00:26:35.889 --rc genhtml_legend=1 00:26:35.889 --rc geninfo_all_blocks=1 00:26:35.889 --rc geninfo_unexecuted_blocks=1 00:26:35.889 00:26:35.889 ' 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:35.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.889 --rc genhtml_branch_coverage=1 00:26:35.889 --rc genhtml_function_coverage=1 00:26:35.889 --rc genhtml_legend=1 00:26:35.889 --rc geninfo_all_blocks=1 00:26:35.889 --rc geninfo_unexecuted_blocks=1 00:26:35.889 00:26:35.889 ' 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:35.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.889 --rc genhtml_branch_coverage=1 00:26:35.889 --rc genhtml_function_coverage=1 00:26:35.889 --rc genhtml_legend=1 00:26:35.889 --rc geninfo_all_blocks=1 00:26:35.889 --rc geninfo_unexecuted_blocks=1 00:26:35.889 00:26:35.889 ' 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:35.889 
20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:35.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:35.889 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:35.890 20:46:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:35.890 20:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:42.460 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:42.461 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:42.461 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:42.461 Found net devices under 0000:af:00.0: cvl_0_0 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.461 20:46:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:42.461 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:42.462 Found net devices under 0000:af:00.1: cvl_0_1 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.462 20:46:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:42.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:42.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:26:42.462 00:26:42.462 --- 10.0.0.2 ping statistics --- 00:26:42.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.462 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:42.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:42.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:26:42.462 00:26:42.462 --- 10.0.0.1 ping statistics --- 00:26:42.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.462 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:42.462 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:42.462 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=482686 00:26:42.462 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:42.462 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 482686 00:26:42.462 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 482686 ']' 00:26:42.462 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.462 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.462 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.463 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.463 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:42.463 [2024-12-05 20:46:35.053223] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:26:42.463 [2024-12-05 20:46:35.053261] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.463 [2024-12-05 20:46:35.130021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:42.463 [2024-12-05 20:46:35.169052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.463 [2024-12-05 20:46:35.169091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:42.463 [2024-12-05 20:46:35.169097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.463 [2024-12-05 20:46:35.169102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.463 [2024-12-05 20:46:35.169107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:42.463 [2024-12-05 20:46:35.170256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.463 [2024-12-05 20:46:35.170257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.463 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.463 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:42.463 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:42.463 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:42.463 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:42.463 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.463 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=482686 00:26:42.463 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:42.720 [2024-12-05 20:46:36.060882] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.720 20:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:26:42.978 Malloc0 00:26:42.978 20:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:43.255 20:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:43.255 20:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.514 [2024-12-05 20:46:36.819625] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.514 20:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:43.772 [2024-12-05 20:46:37.008114] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:43.772 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:43.772 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=482979 00:26:43.772 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:43.772 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 482979 /var/tmp/bdevperf.sock 00:26:43.772 20:46:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 482979 ']' 00:26:43.772 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:43.772 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.772 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:43.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:43.772 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.772 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:44.030 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.030 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:44.030 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:44.030 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:44.597 Nvme0n1 00:26:44.597 20:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:44.871 Nvme0n1 00:26:44.871 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:44.871 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:46.776 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:46.776 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:47.035 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:47.293 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:48.230 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:48.231 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:48.231 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.231 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:48.490 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.490 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:48.490 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.490 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:48.749 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:48.749 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:48.749 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.749 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.749 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.749 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.749 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.749 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:49.008 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.008 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:49.008 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.008 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:49.268 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.268 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:49.268 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.268 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:49.526 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.526 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:49.526 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:49.526 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:49.785 20:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:50.721 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:50.721 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:50.721 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.721 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.980 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.980 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:50.980 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.980 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:51.239 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.239 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:51.239 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.239 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:51.498 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.498 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:51.498 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.498 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:51.498 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.498 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:51.498 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.498 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:51.756 20:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.756 20:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:51.756 20:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.756 20:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:52.014 20:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.014 20:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:52.014 20:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:52.271 20:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:52.271 20:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:53.652 20:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:53.652 20:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:53.652 20:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.652 20:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:53.652 20:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.652 20:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:53.652 20:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.652 20:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:53.910 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.911 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:53.911 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:53.911 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.911 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.911 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:53.911 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.911 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:54.169 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.169 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:54.169 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.169 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:54.427 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.427 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:54.427 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.427 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:54.685 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.685 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:54.685 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:54.685 20:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:54.945 20:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:55.881 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:55.881 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:55.881 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.881 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:56.140 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.140 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:56.140 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.140 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:56.399 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:56.399 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:56.399 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.399 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:56.657 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.657 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:56.657 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.657 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:56.657 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.657 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:56.657 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.658 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:56.916 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.916 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:56.916 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.916 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:57.175 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:57.175 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:57.175 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:57.433 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:57.433 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:58.808 20:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:58.808 20:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:58.808 20:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.808 20:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:58.808 20:46:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.808 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:58.808 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.808 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:58.808 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.808 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:58.808 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.808 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:59.067 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.067 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:59.067 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.067 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:59.332 
20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.332 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:59.332 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.332 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:59.595 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.595 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:59.595 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.595 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:59.595 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.595 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:59.595 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:59.854 20:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:00.112 20:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:01.049 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:01.049 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:01.049 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.049 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:01.308 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:01.308 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:01.308 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.308 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:01.309 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.309 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:01.309 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:01.309 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.569 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.569 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:01.569 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:01.569 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.830 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.830 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:01.830 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.830 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:02.088 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.089 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:02.089 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.089 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:02.089 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.089 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:02.347 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:02.347 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:02.605 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:02.864 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:03.800 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:03.800 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:03.800 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:03.800 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:04.058 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.058 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:04.058 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.058 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:04.317 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.317 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:04.317 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.317 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:04.317 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.317 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:04.317 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:04.317 
20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.575 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.575 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:04.575 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:04.575 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.833 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.833 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:04.833 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.833 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:05.090 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.090 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:05.090 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:05.090 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:05.347 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:06.280 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:06.280 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:06.280 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.280 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:06.538 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:06.538 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:06.538 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.538 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:06.797 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.797 20:47:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:06.797 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.797 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:07.054 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.054 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:07.054 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.055 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:07.055 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.055 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:07.055 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:07.055 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.329 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.329 
20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:07.329 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.329 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:07.587 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.587 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:07.587 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:07.846 20:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:07.846 20:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:09.220 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:09.220 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:09.220 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.220 20:47:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:09.220 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.220 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:09.220 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.220 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:09.220 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.220 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:09.220 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:09.220 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.479 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.479 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:09.479 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.479 20:47:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:09.739 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.739 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:09.739 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.739 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:09.997 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.998 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:09.998 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.998 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:09.998 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.998 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:09.998 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:10.256 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:10.516 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:11.454 20:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:11.454 20:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:11.454 20:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.454 20:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:11.713 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.713 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:11.713 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.713 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:11.971 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:11.971 20:47:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:11.971 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.971 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:11.971 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.971 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:11.971 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.971 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:12.229 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.229 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:12.229 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.229 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:12.489 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.489 
20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:12.489 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.489 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:12.747 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:12.747 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 482979 00:27:12.747 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 482979 ']' 00:27:12.748 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 482979 00:27:12.748 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:12.748 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.748 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482979 00:27:12.748 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:12.748 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:12.748 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482979' 00:27:12.748 killing process with pid 482979 00:27:12.748 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 482979 00:27:12.748 20:47:06 
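The `port_status` checks traced above all follow one pattern: dump the I/O paths from bdevperf's RPC socket, then use `jq` to select the path for a given `trsvcid` and read one boolean field. A minimal sketch of that selection logic, run against a hypothetical sample of `bdev_nvme_get_io_paths` output (the field names mirror the filter in the trace; the sample values are illustrative, not taken from this run):

```shell
# Hypothetical sample shaped like bdev_nvme_get_io_paths output.
cat > /tmp/io_paths_sample.json <<'EOF'
{
  "poll_groups": [
    {
      "io_paths": [
        { "transport": { "trsvcid": "4420" },
          "current": true,  "connected": true, "accessible": true },
        { "transport": { "trsvcid": "4421" },
          "current": false, "connected": true, "accessible": false }
      ]
    }
  ]
}
EOF

# Same filter the test applies per port/field, e.g. port 4420's "current" flag:
jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current' \
    /tmp/io_paths_sample.json
# and port 4421's "accessible" flag:
jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").accessible' \
    /tmp/io_paths_sample.json
```

The script then compares the printed string against the expected `true`/`false`, which is what the `[[ true == \t\r\u\e ]]` lines in the trace are doing.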
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 482979 00:27:12.748 { 00:27:12.748 "results": [ 00:27:12.748 { 00:27:12.748 "job": "Nvme0n1", 00:27:12.748 "core_mask": "0x4", 00:27:12.748 "workload": "verify", 00:27:12.748 "status": "terminated", 00:27:12.748 "verify_range": { 00:27:12.748 "start": 0, 00:27:12.748 "length": 16384 00:27:12.748 }, 00:27:12.748 "queue_depth": 128, 00:27:12.748 "io_size": 4096, 00:27:12.748 "runtime": 27.764825, 00:27:12.748 "iops": 11508.806556497295, 00:27:12.748 "mibps": 44.95627561131756, 00:27:12.748 "io_failed": 0, 00:27:12.748 "io_timeout": 0, 00:27:12.748 "avg_latency_us": 11104.215994856242, 00:27:12.748 "min_latency_us": 595.7818181818182, 00:27:12.748 "max_latency_us": 3019898.88 00:27:12.748 } 00:27:12.748 ], 00:27:12.748 "core_count": 1 00:27:12.748 } 00:27:13.010 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 482979 00:27:13.010 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:13.010 [2024-12-05 20:46:37.072357] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:27:13.010 [2024-12-05 20:46:37.072406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482979 ] 00:27:13.010 [2024-12-05 20:46:37.146600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.010 [2024-12-05 20:46:37.185691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.010 Running I/O for 90 seconds... 
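The bdevperf summary above reports both `iops` and `mibps`; the two are related by the job's `io_size` of 4096 bytes (MiB/s = IOPS × io_size / 2^20). A quick sanity check of that arithmetic against the numbers in the JSON:

```shell
# Recompute mibps from the reported iops and io_size (values copied from the
# "results" block above): 11508.806... IOPS at 4096 B per I/O.
awk 'BEGIN {
  iops    = 11508.806556497295
  io_size = 4096                      # bytes per I/O, from "io_size" above
  mibps   = iops * io_size / 1048576  # 1048576 = 2^20 bytes per MiB
  printf "%.2f MiB/s\n", mibps        # matches the reported 44.956... mibps
}'
```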
00:27:13.010 12527.00 IOPS, 48.93 MiB/s [2024-12-05T19:47:06.451Z] 12532.50 IOPS, 48.96 MiB/s [2024-12-05T19:47:06.451Z] 12516.33 IOPS, 48.89 MiB/s [2024-12-05T19:47:06.451Z] 12507.50 IOPS, 48.86 MiB/s [2024-12-05T19:47:06.451Z] 12511.20 IOPS, 48.87 MiB/s [2024-12-05T19:47:06.451Z] 12515.50 IOPS, 48.89 MiB/s [2024-12-05T19:47:06.451Z] 12479.29 IOPS, 48.75 MiB/s [2024-12-05T19:47:06.451Z] 12489.38 IOPS, 48.79 MiB/s [2024-12-05T19:47:06.451Z] 12496.44 IOPS, 48.81 MiB/s [2024-12-05T19:47:06.451Z] 12488.40 IOPS, 48.78 MiB/s [2024-12-05T19:47:06.451Z] 12515.00 IOPS, 48.89 MiB/s [2024-12-05T19:47:06.451Z] 12512.92 IOPS, 48.88 MiB/s [2024-12-05T19:47:06.451Z] [2024-12-05 20:46:50.632482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.010 [2024-12-05 20:46:50.632517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:13.010 [2024-12-05 20:46:50.632553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.010 [2024-12-05 20:46:50.632561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:13.010 [2024-12-05 20:46:50.632573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.010 [2024-12-05 20:46:50.632580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:13.010 [2024-12-05 20:46:50.632592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.010 [2024-12-05 20:46:50.632598] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:13.010 [2024-12-05 20:46:50.632609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.010 [2024-12-05 20:46:50.632615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:13.010 [2024-12-05 20:46:50.632627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.010 [2024-12-05 20:46:50.632634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:13.010 [2024-12-05 20:46:50.632646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.010 [2024-12-05 20:46:50.632651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:13.010 [2024-12-05 20:46:50.632662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.010 [2024-12-05 20:46:50.632668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:13.010 [2024-12-05 20:46:50.632679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.010 [2024-12-05 20:46:50.632686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:13.010 [2024-12-05 20:46:50.632697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:58 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.010 [2024-12-05 20:46:50.632710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:13.010 [2024-12-05 20:46:50.632721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.010 [2024-12-05 20:46:50.632727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:13.010 [2024-12-05 20:46:50.632739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58816 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.632983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.632989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.633000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.633007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.633017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.633024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.633035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.633041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.633052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.633062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.633074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.633080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.633091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:13.011 [2024-12-05 20:46:50.633097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.633108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.633114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.633126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.633132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:13.011 [2024-12-05 20:46:50.633145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.011 [2024-12-05 20:46:50.633151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:27:13.012 [2024-12-05 20:46:50.633196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 
[2024-12-05 20:46:50.633758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 
20:46:50.633869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633970] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.633983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.633989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.634010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.634016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.634029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.634035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.634048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.012 [2024-12-05 20:46:50.634054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:13.012 [2024-12-05 20:46:50.634073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.013 [2024-12-05 20:46:50.634081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.013 [2024-12-05 20:46:50.634100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.013 [2024-12-05 20:46:50.634119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.013 [2024-12-05 20:46:50.634138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.013 [2024-12-05 20:46:50.634156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.013 [2024-12-05 20:46:50.634175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.013 [2024-12-05 20:46:50.634194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.013 [2024-12-05 20:46:50.634213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.013 [2024-12-05 20:46:50.634232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.013 [2024-12-05 20:46:50.634302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.013 [2024-12-05 20:46:50.634323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.013 [2024-12-05 20:46:50.634343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:13.013 [2024-12-05 20:46:50.634358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.013 [2024-12-05 20:46:50.634365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:13.013 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: READ commands for lba 58640-58656 and WRITE commands for lba 59200-59624, all on qid:1, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 00:27:13.015 11906.31 IOPS, 46.51 MiB/s [2024-12-05T19:47:06.456Z] 11055.86 IOPS, 43.19 MiB/s [2024-12-05T19:47:06.456Z] 10318.80 IOPS, 40.31 MiB/s [2024-12-05T19:47:06.456Z] 10163.12 IOPS, 39.70 MiB/s [2024-12-05T19:47:06.457Z] 10300.41 IOPS, 40.24 MiB/s [2024-12-05T19:47:06.457Z] 10462.94 IOPS, 40.87 MiB/s [2024-12-05T19:47:06.457Z] 10684.63 IOPS, 41.74 MiB/s [2024-12-05T19:47:06.457Z] 10879.30 IOPS, 42.50 MiB/s [2024-12-05T19:47:06.457Z] 10982.62 IOPS, 42.90 MiB/s [2024-12-05T19:47:06.457Z] 11044.95 IOPS, 43.14 MiB/s [2024-12-05T19:47:06.457Z] 11104.96 IOPS, 43.38 MiB/s [2024-12-05T19:47:06.457Z] 11243.62 IOPS, 43.92 MiB/s [2024-12-05T19:47:06.457Z] 11369.64 IOPS, 44.41 MiB/s [2024-12-05T19:47:06.457Z] [2024-12-05 20:47:03.769940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.016 [2024-12-05 20:47:03.769977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:13.016 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: READ/WRITE commands for lba 101256-101616 on qid:1, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 00:27:13.016 11449.42 IOPS, 44.72 MiB/s [2024-12-05T19:47:06.457Z] 11484.81 IOPS, 44.86 MiB/s [2024-12-05T19:47:06.457Z] Received shutdown signal, test time was about 27.765428 seconds 00:27:13.016 00:27:13.016 Latency(us) 00:27:13.016 [2024-12-05T19:47:06.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.016 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:13.016 Verification LBA
range: start 0x0 length 0x4000 00:27:13.016 Nvme0n1 : 27.76 11508.81 44.96 0.00 0.00 11104.22 595.78 3019898.88 00:27:13.016 [2024-12-05T19:47:06.457Z] =================================================================================================================== 00:27:13.016 [2024-12-05T19:47:06.457Z] Total : 11508.81 44.96 0.00 0.00 11104.22 595.78 3019898.88 00:27:13.016 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:13.016 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:13.016 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:13.017 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:13.017 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:13.017 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:27:13.017 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.017 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:27:13.017 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.017 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.017 rmmod nvme_tcp 00:27:13.276 rmmod nvme_fabrics 00:27:13.276 rmmod nvme_keyring 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 
00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 482686 ']' 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 482686 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 482686 ']' 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 482686 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482686 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482686' 00:27:13.276 killing process with pid 482686 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 482686 00:27:13.276 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 482686 00:27:13.535 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:13.535 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:13.535 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:13.535 20:47:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:13.535 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:13.535 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:13.535 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:13.535 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:13.535 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:13.535 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.535 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.535 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.440 20:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:15.440 00:27:15.440 real 0m39.948s 00:27:15.440 user 1m46.614s 00:27:15.440 sys 0m11.333s 00:27:15.440 20:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.440 20:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:15.440 ************************************ 00:27:15.440 END TEST nvmf_host_multipath_status 00:27:15.440 ************************************ 00:27:15.440 20:47:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:15.440 20:47:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:15.440 20:47:08 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.440 20:47:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.699 ************************************ 00:27:15.699 START TEST nvmf_discovery_remove_ifc 00:27:15.699 ************************************ 00:27:15.699 20:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:15.699 * Looking for test storage... 00:27:15.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:15.699 20:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:15.699 20:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:27:15.699 20:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@338 -- # local 'op=<' 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:15.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.699 --rc genhtml_branch_coverage=1 00:27:15.699 --rc genhtml_function_coverage=1 00:27:15.699 --rc genhtml_legend=1 00:27:15.699 --rc geninfo_all_blocks=1 00:27:15.699 --rc geninfo_unexecuted_blocks=1 00:27:15.699 00:27:15.699 ' 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:15.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.699 --rc genhtml_branch_coverage=1 00:27:15.699 --rc genhtml_function_coverage=1 00:27:15.699 --rc genhtml_legend=1 00:27:15.699 --rc geninfo_all_blocks=1 00:27:15.699 --rc geninfo_unexecuted_blocks=1 00:27:15.699 00:27:15.699 ' 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:15.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.699 --rc genhtml_branch_coverage=1 00:27:15.699 --rc genhtml_function_coverage=1 00:27:15.699 --rc genhtml_legend=1 00:27:15.699 --rc geninfo_all_blocks=1 00:27:15.699 --rc geninfo_unexecuted_blocks=1 00:27:15.699 00:27:15.699 ' 00:27:15.699 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:15.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.700 --rc genhtml_branch_coverage=1 00:27:15.700 --rc 
genhtml_function_coverage=1 00:27:15.700 --rc genhtml_legend=1 00:27:15.700 --rc geninfo_all_blocks=1 00:27:15.700 --rc geninfo_unexecuted_blocks=1 00:27:15.700 00:27:15.700 ' 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:15.700 20:47:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:15.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:15.700 
20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:15.700 20:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:22.270 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:22.270 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:22.270 Found net devices under 0000:af:00.0: cvl_0_0 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.270 20:47:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.270 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:22.270 Found net devices under 0000:af:00.1: cvl_0_1 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:22.271 20:47:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.271 20:47:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:22.271 20:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:22.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:27:22.271 00:27:22.271 --- 10.0.0.2 ping statistics --- 00:27:22.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.271 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:22.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:27:22.271 00:27:22.271 --- 10.0.0.1 ping statistics --- 00:27:22.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.271 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=491972 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 491972 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 491972 ']' 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.271 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.271 [2024-12-05 20:47:15.117993] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:27:22.271 [2024-12-05 20:47:15.118041] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.271 [2024-12-05 20:47:15.194429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.271 [2024-12-05 20:47:15.232725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.271 [2024-12-05 20:47:15.232760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:22.271 [2024-12-05 20:47:15.232767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.271 [2024-12-05 20:47:15.232772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.271 [2024-12-05 20:47:15.232776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.271 [2024-12-05 20:47:15.233391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.530 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.530 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:22.530 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:22.530 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:22.530 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.530 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.530 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:22.530 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.530 20:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.789 [2024-12-05 20:47:15.978050] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.789 [2024-12-05 20:47:15.986247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:22.789 null0 00:27:22.789 [2024-12-05 20:47:16.018207] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:27:22.789 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.789 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=492162 00:27:22.789 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:22.789 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 492162 /tmp/host.sock 00:27:22.789 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 492162 ']' 00:27:22.789 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:22.789 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.789 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:22.789 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:22.789 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.789 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.789 [2024-12-05 20:47:16.088779] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:27:22.789 [2024-12-05 20:47:16.088817] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492162 ] 00:27:22.789 [2024-12-05 20:47:16.161913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.789 [2024-12-05 20:47:16.202137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.726 20:47:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.726 20:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.660 [2024-12-05 20:47:18.026465] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:24.660 [2024-12-05 20:47:18.026483] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:24.660 [2024-12-05 20:47:18.026497] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:24.918 [2024-12-05 20:47:18.152882] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:24.918 [2024-12-05 20:47:18.287637] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:24.918 [2024-12-05 20:47:18.288388] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x200e9d0:1 started. 
00:27:24.918 [2024-12-05 20:47:18.289609] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:24.918 [2024-12-05 20:47:18.289644] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:24.918 [2024-12-05 20:47:18.289662] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:24.918 [2024-12-05 20:47:18.289672] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:24.918 [2024-12-05 20:47:18.289687] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:24.918 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.918 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:24.918 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.918 [2024-12-05 20:47:18.295160] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x200e9d0 was disconnected and freed. delete nvme_qpair. 
00:27:24.918 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.918 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.918 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.919 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.919 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.919 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.919 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.919 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:24.919 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:24.919 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:25.178 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:25.178 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.178 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.178 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.178 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.178 20:47:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.178 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.178 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.178 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.178 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:25.178 20:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.270 20:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.270 20:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.270 20:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.270 20:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.270 20:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.270 20:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.270 20:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.270 20:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.270 20:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:26.270 20:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.261 20:47:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:27:27.261 20:47:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.261 20:47:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:27.261 20:47:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.261 20:47:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:27.261 20:47:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.261 20:47:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.261 20:47:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.261 20:47:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:27.261 20:47:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:28.280 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.280 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.280 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.280 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.280 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.280 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.280 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.280 20:47:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.280 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:28.281 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:29.298 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:29.298 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.298 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:29.298 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.298 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:29.298 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.298 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:29.298 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.298 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:29.298 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:30.676 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.676 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.676 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:30.676 20:47:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.676 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:30.676 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.676 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.676 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.676 [2024-12-05 20:47:23.731267] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:30.676 [2024-12-05 20:47:23.731312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.676 [2024-12-05 20:47:23.731322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.676 [2024-12-05 20:47:23.731347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.676 [2024-12-05 20:47:23.731353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.676 [2024-12-05 20:47:23.731359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.676 [2024-12-05 20:47:23.731365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.676 [2024-12-05 20:47:23.731371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.676 
[2024-12-05 20:47:23.731377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.676 [2024-12-05 20:47:23.731384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.676 [2024-12-05 20:47:23.731390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.676 [2024-12-05 20:47:23.731396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feb210 is same with the state(6) to be set 00:27:30.676 [2024-12-05 20:47:23.741290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feb210 (9): Bad file descriptor 00:27:30.676 [2024-12-05 20:47:23.751324] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:30.676 [2024-12-05 20:47:23.751336] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:30.676 [2024-12-05 20:47:23.751342] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:30.676 [2024-12-05 20:47:23.751346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:30.676 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:30.676 [2024-12-05 20:47:23.751368] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:30.676 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:31.610 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.610 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.610 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.610 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.610 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.610 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.610 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.610 [2024-12-05 20:47:24.780091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:31.610 [2024-12-05 20:47:24.780162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feb210 with addr=10.0.0.2, port=4420 00:27:31.610 [2024-12-05 20:47:24.780193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feb210 is same with the state(6) to be set 00:27:31.610 [2024-12-05 20:47:24.780241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feb210 (9): Bad file descriptor 00:27:31.610 [2024-12-05 20:47:24.781190] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:27:31.610 [2024-12-05 20:47:24.781253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:31.610 [2024-12-05 20:47:24.781278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:31.610 [2024-12-05 20:47:24.781300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:31.610 [2024-12-05 20:47:24.781321] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:31.610 [2024-12-05 20:47:24.781335] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:31.610 [2024-12-05 20:47:24.781349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:31.611 [2024-12-05 20:47:24.781370] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:31.611 [2024-12-05 20:47:24.781384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:31.611 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.611 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:31.611 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:32.546 [2024-12-05 20:47:25.783899] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:32.546 [2024-12-05 20:47:25.783918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:32.546 [2024-12-05 20:47:25.783929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:32.546 [2024-12-05 20:47:25.783935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:32.546 [2024-12-05 20:47:25.783941] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:32.546 [2024-12-05 20:47:25.783947] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:32.546 [2024-12-05 20:47:25.783951] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:32.546 [2024-12-05 20:47:25.783955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:32.546 [2024-12-05 20:47:25.783972] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:32.546 [2024-12-05 20:47:25.783991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.546 [2024-12-05 20:47:25.783999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.546 [2024-12-05 20:47:25.784008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.546 [2024-12-05 20:47:25.784014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.546 [2024-12-05 20:47:25.784021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:32.546 [2024-12-05 20:47:25.784027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.546 [2024-12-05 20:47:25.784033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.546 [2024-12-05 20:47:25.784039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.546 [2024-12-05 20:47:25.784049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.546 [2024-12-05 20:47:25.784055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.546 [2024-12-05 20:47:25.784066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:32.546 [2024-12-05 20:47:25.784361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fda930 (9): Bad file descriptor 00:27:32.546 [2024-12-05 20:47:25.785372] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:32.546 [2024-12-05 20:47:25.785382] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:32.546 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:32.805 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:32.805 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:33.741 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.741 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.741 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.741 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.741 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.741 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.741 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.741 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.741 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:33.741 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.679 [2024-12-05 20:47:27.841505] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:34.679 [2024-12-05 20:47:27.841521] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:34.679 [2024-12-05 20:47:27.841532] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:34.679 [2024-12-05 20:47:27.968906] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:34.679 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.679 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.679 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.679 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.679 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:34.679 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.679 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.679 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.679 [2024-12-05 20:47:28.070662] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:34.679 [2024-12-05 20:47:28.071205] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1ff4750:1 started. 
00:27:34.679 [2024-12-05 20:47:28.072150] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:34.679 [2024-12-05 20:47:28.072179] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:34.679 [2024-12-05 20:47:28.072194] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:34.679 [2024-12-05 20:47:28.072205] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:34.679 [2024-12-05 20:47:28.072211] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:34.679 [2024-12-05 20:47:28.080129] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1ff4750 was disconnected and freed. delete nvme_qpair. 00:27:34.679 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:34.679 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:36.054 20:47:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 492162 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 492162 ']' 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 492162 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492162 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492162' 00:27:36.054 killing process with pid 492162 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 492162 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 492162 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:36.054 rmmod nvme_tcp 00:27:36.054 rmmod nvme_fabrics 00:27:36.054 rmmod nvme_keyring 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 491972 ']' 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 491972 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 491972 ']' 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 491972 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 491972 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:36.054 20:47:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 491972' 00:27:36.054 killing process with pid 491972 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 491972 00:27:36.054 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 491972 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.314 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.851 20:47:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.851 00:27:38.851 real 0m22.810s 00:27:38.851 user 0m28.936s 00:27:38.851 sys 0m5.987s 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.851 ************************************ 00:27:38.851 END TEST nvmf_discovery_remove_ifc 00:27:38.851 ************************************ 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.851 ************************************ 00:27:38.851 START TEST nvmf_identify_kernel_target 00:27:38.851 ************************************ 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:38.851 * Looking for test storage... 
00:27:38.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:38.851 20:47:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:38.851 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.852 20:47:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:38.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.852 --rc genhtml_branch_coverage=1 00:27:38.852 --rc genhtml_function_coverage=1 00:27:38.852 --rc genhtml_legend=1 00:27:38.852 --rc geninfo_all_blocks=1 00:27:38.852 --rc geninfo_unexecuted_blocks=1 00:27:38.852 00:27:38.852 ' 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:38.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.852 --rc genhtml_branch_coverage=1 00:27:38.852 --rc genhtml_function_coverage=1 00:27:38.852 --rc genhtml_legend=1 00:27:38.852 --rc geninfo_all_blocks=1 00:27:38.852 --rc geninfo_unexecuted_blocks=1 00:27:38.852 00:27:38.852 ' 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:38.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.852 --rc genhtml_branch_coverage=1 00:27:38.852 --rc genhtml_function_coverage=1 00:27:38.852 --rc genhtml_legend=1 00:27:38.852 --rc geninfo_all_blocks=1 00:27:38.852 --rc geninfo_unexecuted_blocks=1 00:27:38.852 00:27:38.852 ' 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:38.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.852 --rc genhtml_branch_coverage=1 00:27:38.852 --rc genhtml_function_coverage=1 00:27:38.852 --rc genhtml_legend=1 00:27:38.852 --rc geninfo_all_blocks=1 00:27:38.852 --rc geninfo_unexecuted_blocks=1 00:27:38.852 00:27:38.852 ' 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:38.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.852 20:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.425 20:47:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.425 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:45.426 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.426 20:47:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:45.426 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.426 20:47:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:45.426 Found net devices under 0000:af:00.0: cvl_0_0 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:45.426 Found net devices under 0000:af:00.1: cvl_0_1 
00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:45.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:27:45.426 00:27:45.426 --- 10.0.0.2 ping statistics --- 00:27:45.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.426 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:27:45.426 00:27:45.426 --- 10.0.0.1 ping statistics --- 00:27:45.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.426 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:45.426 
20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:45.426 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:45.427 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:47.329 Waiting for block devices as requested 00:27:47.329 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:27:47.586 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:47.586 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:47.586 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:47.844 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:47.844 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:47.844 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:48.103 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:48.103 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:48.103 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:48.361 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:48.361 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:48.361 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:48.361 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:48.621 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:27:48.621 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:48.621 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:48.880 No valid GPT data, bailing 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:48.880 00:27:48.880 Discovery Log Number of Records 2, Generation counter 2 00:27:48.880 =====Discovery Log Entry 0====== 00:27:48.880 trtype: tcp 00:27:48.880 adrfam: ipv4 00:27:48.880 subtype: current discovery subsystem 
00:27:48.880 treq: not specified, sq flow control disable supported 00:27:48.880 portid: 1 00:27:48.880 trsvcid: 4420 00:27:48.880 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:48.880 traddr: 10.0.0.1 00:27:48.880 eflags: none 00:27:48.880 sectype: none 00:27:48.880 =====Discovery Log Entry 1====== 00:27:48.880 trtype: tcp 00:27:48.880 adrfam: ipv4 00:27:48.880 subtype: nvme subsystem 00:27:48.880 treq: not specified, sq flow control disable supported 00:27:48.880 portid: 1 00:27:48.880 trsvcid: 4420 00:27:48.880 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:48.880 traddr: 10.0.0.1 00:27:48.880 eflags: none 00:27:48.880 sectype: none 00:27:48.880 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:48.880 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:49.141 ===================================================== 00:27:49.141 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:49.141 ===================================================== 00:27:49.141 Controller Capabilities/Features 00:27:49.141 ================================ 00:27:49.141 Vendor ID: 0000 00:27:49.141 Subsystem Vendor ID: 0000 00:27:49.141 Serial Number: 1d2651c5c88bec612ddb 00:27:49.141 Model Number: Linux 00:27:49.141 Firmware Version: 6.8.9-20 00:27:49.141 Recommended Arb Burst: 0 00:27:49.141 IEEE OUI Identifier: 00 00 00 00:27:49.141 Multi-path I/O 00:27:49.141 May have multiple subsystem ports: No 00:27:49.141 May have multiple controllers: No 00:27:49.141 Associated with SR-IOV VF: No 00:27:49.141 Max Data Transfer Size: Unlimited 00:27:49.141 Max Number of Namespaces: 0 00:27:49.141 Max Number of I/O Queues: 1024 00:27:49.141 NVMe Specification Version (VS): 1.3 00:27:49.141 NVMe Specification Version (Identify): 1.3 00:27:49.141 Maximum Queue Entries: 1024 
00:27:49.141 Contiguous Queues Required: No 00:27:49.141 Arbitration Mechanisms Supported 00:27:49.141 Weighted Round Robin: Not Supported 00:27:49.141 Vendor Specific: Not Supported 00:27:49.141 Reset Timeout: 7500 ms 00:27:49.141 Doorbell Stride: 4 bytes 00:27:49.141 NVM Subsystem Reset: Not Supported 00:27:49.141 Command Sets Supported 00:27:49.141 NVM Command Set: Supported 00:27:49.141 Boot Partition: Not Supported 00:27:49.141 Memory Page Size Minimum: 4096 bytes 00:27:49.141 Memory Page Size Maximum: 4096 bytes 00:27:49.141 Persistent Memory Region: Not Supported 00:27:49.141 Optional Asynchronous Events Supported 00:27:49.141 Namespace Attribute Notices: Not Supported 00:27:49.141 Firmware Activation Notices: Not Supported 00:27:49.141 ANA Change Notices: Not Supported 00:27:49.141 PLE Aggregate Log Change Notices: Not Supported 00:27:49.141 LBA Status Info Alert Notices: Not Supported 00:27:49.141 EGE Aggregate Log Change Notices: Not Supported 00:27:49.141 Normal NVM Subsystem Shutdown event: Not Supported 00:27:49.141 Zone Descriptor Change Notices: Not Supported 00:27:49.141 Discovery Log Change Notices: Supported 00:27:49.141 Controller Attributes 00:27:49.141 128-bit Host Identifier: Not Supported 00:27:49.141 Non-Operational Permissive Mode: Not Supported 00:27:49.141 NVM Sets: Not Supported 00:27:49.141 Read Recovery Levels: Not Supported 00:27:49.141 Endurance Groups: Not Supported 00:27:49.141 Predictable Latency Mode: Not Supported 00:27:49.141 Traffic Based Keep ALive: Not Supported 00:27:49.141 Namespace Granularity: Not Supported 00:27:49.141 SQ Associations: Not Supported 00:27:49.141 UUID List: Not Supported 00:27:49.141 Multi-Domain Subsystem: Not Supported 00:27:49.141 Fixed Capacity Management: Not Supported 00:27:49.141 Variable Capacity Management: Not Supported 00:27:49.141 Delete Endurance Group: Not Supported 00:27:49.141 Delete NVM Set: Not Supported 00:27:49.141 Extended LBA Formats Supported: Not Supported 00:27:49.141 Flexible 
Data Placement Supported: Not Supported 00:27:49.141 00:27:49.141 Controller Memory Buffer Support 00:27:49.141 ================================ 00:27:49.141 Supported: No 00:27:49.141 00:27:49.141 Persistent Memory Region Support 00:27:49.141 ================================ 00:27:49.141 Supported: No 00:27:49.141 00:27:49.141 Admin Command Set Attributes 00:27:49.141 ============================ 00:27:49.141 Security Send/Receive: Not Supported 00:27:49.141 Format NVM: Not Supported 00:27:49.141 Firmware Activate/Download: Not Supported 00:27:49.141 Namespace Management: Not Supported 00:27:49.141 Device Self-Test: Not Supported 00:27:49.141 Directives: Not Supported 00:27:49.141 NVMe-MI: Not Supported 00:27:49.141 Virtualization Management: Not Supported 00:27:49.141 Doorbell Buffer Config: Not Supported 00:27:49.141 Get LBA Status Capability: Not Supported 00:27:49.141 Command & Feature Lockdown Capability: Not Supported 00:27:49.141 Abort Command Limit: 1 00:27:49.141 Async Event Request Limit: 1 00:27:49.141 Number of Firmware Slots: N/A 00:27:49.141 Firmware Slot 1 Read-Only: N/A 00:27:49.141 Firmware Activation Without Reset: N/A 00:27:49.141 Multiple Update Detection Support: N/A 00:27:49.141 Firmware Update Granularity: No Information Provided 00:27:49.141 Per-Namespace SMART Log: No 00:27:49.141 Asymmetric Namespace Access Log Page: Not Supported 00:27:49.141 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:49.141 Command Effects Log Page: Not Supported 00:27:49.141 Get Log Page Extended Data: Supported 00:27:49.141 Telemetry Log Pages: Not Supported 00:27:49.141 Persistent Event Log Pages: Not Supported 00:27:49.141 Supported Log Pages Log Page: May Support 00:27:49.141 Commands Supported & Effects Log Page: Not Supported 00:27:49.141 Feature Identifiers & Effects Log Page:May Support 00:27:49.141 NVMe-MI Commands & Effects Log Page: May Support 00:27:49.141 Data Area 4 for Telemetry Log: Not Supported 00:27:49.141 Error Log Page Entries 
Supported: 1 00:27:49.141 Keep Alive: Not Supported 00:27:49.141 00:27:49.141 NVM Command Set Attributes 00:27:49.141 ========================== 00:27:49.141 Submission Queue Entry Size 00:27:49.141 Max: 1 00:27:49.141 Min: 1 00:27:49.141 Completion Queue Entry Size 00:27:49.141 Max: 1 00:27:49.141 Min: 1 00:27:49.141 Number of Namespaces: 0 00:27:49.141 Compare Command: Not Supported 00:27:49.141 Write Uncorrectable Command: Not Supported 00:27:49.142 Dataset Management Command: Not Supported 00:27:49.142 Write Zeroes Command: Not Supported 00:27:49.142 Set Features Save Field: Not Supported 00:27:49.142 Reservations: Not Supported 00:27:49.142 Timestamp: Not Supported 00:27:49.142 Copy: Not Supported 00:27:49.142 Volatile Write Cache: Not Present 00:27:49.142 Atomic Write Unit (Normal): 1 00:27:49.142 Atomic Write Unit (PFail): 1 00:27:49.142 Atomic Compare & Write Unit: 1 00:27:49.142 Fused Compare & Write: Not Supported 00:27:49.142 Scatter-Gather List 00:27:49.142 SGL Command Set: Supported 00:27:49.142 SGL Keyed: Not Supported 00:27:49.142 SGL Bit Bucket Descriptor: Not Supported 00:27:49.142 SGL Metadata Pointer: Not Supported 00:27:49.142 Oversized SGL: Not Supported 00:27:49.142 SGL Metadata Address: Not Supported 00:27:49.142 SGL Offset: Supported 00:27:49.142 Transport SGL Data Block: Not Supported 00:27:49.142 Replay Protected Memory Block: Not Supported 00:27:49.142 00:27:49.142 Firmware Slot Information 00:27:49.142 ========================= 00:27:49.142 Active slot: 0 00:27:49.142 00:27:49.142 00:27:49.142 Error Log 00:27:49.142 ========= 00:27:49.142 00:27:49.142 Active Namespaces 00:27:49.142 ================= 00:27:49.142 Discovery Log Page 00:27:49.142 ================== 00:27:49.142 Generation Counter: 2 00:27:49.142 Number of Records: 2 00:27:49.142 Record Format: 0 00:27:49.142 00:27:49.142 Discovery Log Entry 0 00:27:49.142 ---------------------- 00:27:49.142 Transport Type: 3 (TCP) 00:27:49.142 Address Family: 1 (IPv4) 00:27:49.142 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:27:49.142 Entry Flags: 00:27:49.142 Duplicate Returned Information: 0 00:27:49.142 Explicit Persistent Connection Support for Discovery: 0 00:27:49.142 Transport Requirements: 00:27:49.142 Secure Channel: Not Specified 00:27:49.142 Port ID: 1 (0x0001) 00:27:49.142 Controller ID: 65535 (0xffff) 00:27:49.142 Admin Max SQ Size: 32 00:27:49.142 Transport Service Identifier: 4420 00:27:49.142 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:49.142 Transport Address: 10.0.0.1 00:27:49.142 Discovery Log Entry 1 00:27:49.142 ---------------------- 00:27:49.142 Transport Type: 3 (TCP) 00:27:49.142 Address Family: 1 (IPv4) 00:27:49.142 Subsystem Type: 2 (NVM Subsystem) 00:27:49.142 Entry Flags: 00:27:49.142 Duplicate Returned Information: 0 00:27:49.142 Explicit Persistent Connection Support for Discovery: 0 00:27:49.142 Transport Requirements: 00:27:49.142 Secure Channel: Not Specified 00:27:49.142 Port ID: 1 (0x0001) 00:27:49.142 Controller ID: 65535 (0xffff) 00:27:49.142 Admin Max SQ Size: 32 00:27:49.142 Transport Service Identifier: 4420 00:27:49.142 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:49.142 Transport Address: 10.0.0.1 00:27:49.142 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:49.142 get_feature(0x01) failed 00:27:49.142 get_feature(0x02) failed 00:27:49.142 get_feature(0x04) failed 00:27:49.142 ===================================================== 00:27:49.142 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:49.142 ===================================================== 00:27:49.142 Controller Capabilities/Features 00:27:49.142 ================================ 00:27:49.142 Vendor ID: 0000 00:27:49.142 Subsystem Vendor ID: 
0000 00:27:49.142 Serial Number: 39950c33d1549eece5d0 00:27:49.142 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:49.142 Firmware Version: 6.8.9-20 00:27:49.142 Recommended Arb Burst: 6 00:27:49.142 IEEE OUI Identifier: 00 00 00 00:27:49.142 Multi-path I/O 00:27:49.142 May have multiple subsystem ports: Yes 00:27:49.142 May have multiple controllers: Yes 00:27:49.142 Associated with SR-IOV VF: No 00:27:49.142 Max Data Transfer Size: Unlimited 00:27:49.142 Max Number of Namespaces: 1024 00:27:49.142 Max Number of I/O Queues: 128 00:27:49.142 NVMe Specification Version (VS): 1.3 00:27:49.142 NVMe Specification Version (Identify): 1.3 00:27:49.142 Maximum Queue Entries: 1024 00:27:49.142 Contiguous Queues Required: No 00:27:49.142 Arbitration Mechanisms Supported 00:27:49.142 Weighted Round Robin: Not Supported 00:27:49.142 Vendor Specific: Not Supported 00:27:49.142 Reset Timeout: 7500 ms 00:27:49.142 Doorbell Stride: 4 bytes 00:27:49.142 NVM Subsystem Reset: Not Supported 00:27:49.142 Command Sets Supported 00:27:49.142 NVM Command Set: Supported 00:27:49.142 Boot Partition: Not Supported 00:27:49.142 Memory Page Size Minimum: 4096 bytes 00:27:49.142 Memory Page Size Maximum: 4096 bytes 00:27:49.142 Persistent Memory Region: Not Supported 00:27:49.142 Optional Asynchronous Events Supported 00:27:49.142 Namespace Attribute Notices: Supported 00:27:49.142 Firmware Activation Notices: Not Supported 00:27:49.142 ANA Change Notices: Supported 00:27:49.142 PLE Aggregate Log Change Notices: Not Supported 00:27:49.142 LBA Status Info Alert Notices: Not Supported 00:27:49.142 EGE Aggregate Log Change Notices: Not Supported 00:27:49.142 Normal NVM Subsystem Shutdown event: Not Supported 00:27:49.142 Zone Descriptor Change Notices: Not Supported 00:27:49.142 Discovery Log Change Notices: Not Supported 00:27:49.142 Controller Attributes 00:27:49.142 128-bit Host Identifier: Supported 00:27:49.142 Non-Operational Permissive Mode: Not Supported 00:27:49.142 NVM Sets: Not 
Supported 00:27:49.142 Read Recovery Levels: Not Supported 00:27:49.142 Endurance Groups: Not Supported 00:27:49.142 Predictable Latency Mode: Not Supported 00:27:49.142 Traffic Based Keep ALive: Supported 00:27:49.142 Namespace Granularity: Not Supported 00:27:49.142 SQ Associations: Not Supported 00:27:49.142 UUID List: Not Supported 00:27:49.142 Multi-Domain Subsystem: Not Supported 00:27:49.142 Fixed Capacity Management: Not Supported 00:27:49.142 Variable Capacity Management: Not Supported 00:27:49.142 Delete Endurance Group: Not Supported 00:27:49.142 Delete NVM Set: Not Supported 00:27:49.142 Extended LBA Formats Supported: Not Supported 00:27:49.142 Flexible Data Placement Supported: Not Supported 00:27:49.142 00:27:49.142 Controller Memory Buffer Support 00:27:49.142 ================================ 00:27:49.142 Supported: No 00:27:49.142 00:27:49.142 Persistent Memory Region Support 00:27:49.142 ================================ 00:27:49.142 Supported: No 00:27:49.142 00:27:49.142 Admin Command Set Attributes 00:27:49.142 ============================ 00:27:49.142 Security Send/Receive: Not Supported 00:27:49.142 Format NVM: Not Supported 00:27:49.142 Firmware Activate/Download: Not Supported 00:27:49.142 Namespace Management: Not Supported 00:27:49.142 Device Self-Test: Not Supported 00:27:49.142 Directives: Not Supported 00:27:49.142 NVMe-MI: Not Supported 00:27:49.142 Virtualization Management: Not Supported 00:27:49.142 Doorbell Buffer Config: Not Supported 00:27:49.142 Get LBA Status Capability: Not Supported 00:27:49.142 Command & Feature Lockdown Capability: Not Supported 00:27:49.142 Abort Command Limit: 4 00:27:49.142 Async Event Request Limit: 4 00:27:49.142 Number of Firmware Slots: N/A 00:27:49.142 Firmware Slot 1 Read-Only: N/A 00:27:49.142 Firmware Activation Without Reset: N/A 00:27:49.142 Multiple Update Detection Support: N/A 00:27:49.142 Firmware Update Granularity: No Information Provided 00:27:49.142 Per-Namespace SMART Log: Yes 
00:27:49.142 Asymmetric Namespace Access Log Page: Supported 00:27:49.142 ANA Transition Time : 10 sec 00:27:49.142 00:27:49.142 Asymmetric Namespace Access Capabilities 00:27:49.142 ANA Optimized State : Supported 00:27:49.142 ANA Non-Optimized State : Supported 00:27:49.142 ANA Inaccessible State : Supported 00:27:49.142 ANA Persistent Loss State : Supported 00:27:49.142 ANA Change State : Supported 00:27:49.142 ANAGRPID is not changed : No 00:27:49.142 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:49.142 00:27:49.142 ANA Group Identifier Maximum : 128 00:27:49.142 Number of ANA Group Identifiers : 128 00:27:49.142 Max Number of Allowed Namespaces : 1024 00:27:49.142 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:49.142 Command Effects Log Page: Supported 00:27:49.142 Get Log Page Extended Data: Supported 00:27:49.142 Telemetry Log Pages: Not Supported 00:27:49.142 Persistent Event Log Pages: Not Supported 00:27:49.142 Supported Log Pages Log Page: May Support 00:27:49.143 Commands Supported & Effects Log Page: Not Supported 00:27:49.143 Feature Identifiers & Effects Log Page:May Support 00:27:49.143 NVMe-MI Commands & Effects Log Page: May Support 00:27:49.143 Data Area 4 for Telemetry Log: Not Supported 00:27:49.143 Error Log Page Entries Supported: 128 00:27:49.143 Keep Alive: Supported 00:27:49.143 Keep Alive Granularity: 1000 ms 00:27:49.143 00:27:49.143 NVM Command Set Attributes 00:27:49.143 ========================== 00:27:49.143 Submission Queue Entry Size 00:27:49.143 Max: 64 00:27:49.143 Min: 64 00:27:49.143 Completion Queue Entry Size 00:27:49.143 Max: 16 00:27:49.143 Min: 16 00:27:49.143 Number of Namespaces: 1024 00:27:49.143 Compare Command: Not Supported 00:27:49.143 Write Uncorrectable Command: Not Supported 00:27:49.143 Dataset Management Command: Supported 00:27:49.143 Write Zeroes Command: Supported 00:27:49.143 Set Features Save Field: Not Supported 00:27:49.143 Reservations: Not Supported 00:27:49.143 Timestamp: Not Supported 
00:27:49.143 Copy: Not Supported 00:27:49.143 Volatile Write Cache: Present 00:27:49.143 Atomic Write Unit (Normal): 1 00:27:49.143 Atomic Write Unit (PFail): 1 00:27:49.143 Atomic Compare & Write Unit: 1 00:27:49.143 Fused Compare & Write: Not Supported 00:27:49.143 Scatter-Gather List 00:27:49.143 SGL Command Set: Supported 00:27:49.143 SGL Keyed: Not Supported 00:27:49.143 SGL Bit Bucket Descriptor: Not Supported 00:27:49.143 SGL Metadata Pointer: Not Supported 00:27:49.143 Oversized SGL: Not Supported 00:27:49.143 SGL Metadata Address: Not Supported 00:27:49.143 SGL Offset: Supported 00:27:49.143 Transport SGL Data Block: Not Supported 00:27:49.143 Replay Protected Memory Block: Not Supported 00:27:49.143 00:27:49.143 Firmware Slot Information 00:27:49.143 ========================= 00:27:49.143 Active slot: 0 00:27:49.143 00:27:49.143 Asymmetric Namespace Access 00:27:49.143 =========================== 00:27:49.143 Change Count : 0 00:27:49.143 Number of ANA Group Descriptors : 1 00:27:49.143 ANA Group Descriptor : 0 00:27:49.143 ANA Group ID : 1 00:27:49.143 Number of NSID Values : 1 00:27:49.143 Change Count : 0 00:27:49.143 ANA State : 1 00:27:49.143 Namespace Identifier : 1 00:27:49.143 00:27:49.143 Commands Supported and Effects 00:27:49.143 ============================== 00:27:49.143 Admin Commands 00:27:49.143 -------------- 00:27:49.143 Get Log Page (02h): Supported 00:27:49.143 Identify (06h): Supported 00:27:49.143 Abort (08h): Supported 00:27:49.143 Set Features (09h): Supported 00:27:49.143 Get Features (0Ah): Supported 00:27:49.143 Asynchronous Event Request (0Ch): Supported 00:27:49.143 Keep Alive (18h): Supported 00:27:49.143 I/O Commands 00:27:49.143 ------------ 00:27:49.143 Flush (00h): Supported 00:27:49.143 Write (01h): Supported LBA-Change 00:27:49.143 Read (02h): Supported 00:27:49.143 Write Zeroes (08h): Supported LBA-Change 00:27:49.143 Dataset Management (09h): Supported 00:27:49.143 00:27:49.143 Error Log 00:27:49.143 ========= 
00:27:49.143 Entry: 0 00:27:49.143 Error Count: 0x3 00:27:49.143 Submission Queue Id: 0x0 00:27:49.143 Command Id: 0x5 00:27:49.143 Phase Bit: 0 00:27:49.143 Status Code: 0x2 00:27:49.143 Status Code Type: 0x0 00:27:49.143 Do Not Retry: 1 00:27:49.143 Error Location: 0x28 00:27:49.143 LBA: 0x0 00:27:49.143 Namespace: 0x0 00:27:49.143 Vendor Log Page: 0x0 00:27:49.143 ----------- 00:27:49.143 Entry: 1 00:27:49.143 Error Count: 0x2 00:27:49.143 Submission Queue Id: 0x0 00:27:49.143 Command Id: 0x5 00:27:49.143 Phase Bit: 0 00:27:49.143 Status Code: 0x2 00:27:49.143 Status Code Type: 0x0 00:27:49.143 Do Not Retry: 1 00:27:49.143 Error Location: 0x28 00:27:49.143 LBA: 0x0 00:27:49.143 Namespace: 0x0 00:27:49.143 Vendor Log Page: 0x0 00:27:49.143 ----------- 00:27:49.143 Entry: 2 00:27:49.143 Error Count: 0x1 00:27:49.143 Submission Queue Id: 0x0 00:27:49.143 Command Id: 0x4 00:27:49.143 Phase Bit: 0 00:27:49.143 Status Code: 0x2 00:27:49.143 Status Code Type: 0x0 00:27:49.143 Do Not Retry: 1 00:27:49.143 Error Location: 0x28 00:27:49.143 LBA: 0x0 00:27:49.143 Namespace: 0x0 00:27:49.143 Vendor Log Page: 0x0 00:27:49.143 00:27:49.143 Number of Queues 00:27:49.143 ================ 00:27:49.143 Number of I/O Submission Queues: 128 00:27:49.143 Number of I/O Completion Queues: 128 00:27:49.143 00:27:49.143 ZNS Specific Controller Data 00:27:49.143 ============================ 00:27:49.143 Zone Append Size Limit: 0 00:27:49.143 00:27:49.143 00:27:49.143 Active Namespaces 00:27:49.143 ================= 00:27:49.143 get_feature(0x05) failed 00:27:49.143 Namespace ID:1 00:27:49.143 Command Set Identifier: NVM (00h) 00:27:49.143 Deallocate: Supported 00:27:49.143 Deallocated/Unwritten Error: Not Supported 00:27:49.143 Deallocated Read Value: Unknown 00:27:49.143 Deallocate in Write Zeroes: Not Supported 00:27:49.143 Deallocated Guard Field: 0xFFFF 00:27:49.143 Flush: Supported 00:27:49.143 Reservation: Not Supported 00:27:49.143 Namespace Sharing Capabilities: Multiple 
Controllers 00:27:49.143 Size (in LBAs): 1953525168 (931GiB) 00:27:49.143 Capacity (in LBAs): 1953525168 (931GiB) 00:27:49.143 Utilization (in LBAs): 1953525168 (931GiB) 00:27:49.143 UUID: 5b1a3800-380a-4294-ae19-57046526b31a 00:27:49.143 Thin Provisioning: Not Supported 00:27:49.143 Per-NS Atomic Units: Yes 00:27:49.143 Atomic Boundary Size (Normal): 0 00:27:49.143 Atomic Boundary Size (PFail): 0 00:27:49.143 Atomic Boundary Offset: 0 00:27:49.143 NGUID/EUI64 Never Reused: No 00:27:49.143 ANA group ID: 1 00:27:49.143 Namespace Write Protected: No 00:27:49.143 Number of LBA Formats: 1 00:27:49.143 Current LBA Format: LBA Format #00 00:27:49.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:49.143 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.143 rmmod nvme_tcp 00:27:49.143 rmmod nvme_fabrics 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.143 20:47:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.707 20:47:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:51.707 20:47:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:51.707 20:47:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:51.707 20:47:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:51.707 20:47:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:51.707 20:47:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:51.707 20:47:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:51.707 20:47:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:51.707 20:47:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:51.707 20:47:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:51.707 20:47:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:54.239 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:54.239 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:27:55.176 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:27:55.176 00:27:55.176 real 0m16.788s 00:27:55.176 user 0m4.291s 00:27:55.176 sys 0m8.803s 00:27:55.176 20:47:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.176 20:47:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.176 ************************************ 00:27:55.176 END TEST nvmf_identify_kernel_target 00:27:55.176 ************************************ 00:27:55.176 20:47:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:55.176 20:47:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:55.176 20:47:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.176 20:47:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.435 ************************************ 00:27:55.435 START TEST nvmf_auth_host 00:27:55.435 ************************************ 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:55.435 * Looking for test storage... 
00:27:55.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:55.435 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.436 --rc genhtml_branch_coverage=1 00:27:55.436 --rc genhtml_function_coverage=1 00:27:55.436 --rc genhtml_legend=1 00:27:55.436 --rc geninfo_all_blocks=1 00:27:55.436 --rc geninfo_unexecuted_blocks=1 00:27:55.436 00:27:55.436 ' 00:27:55.436 20:47:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.436 --rc genhtml_branch_coverage=1 00:27:55.436 --rc genhtml_function_coverage=1 00:27:55.436 --rc genhtml_legend=1 00:27:55.436 --rc geninfo_all_blocks=1 00:27:55.436 --rc geninfo_unexecuted_blocks=1 00:27:55.436 00:27:55.436 ' 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.436 --rc genhtml_branch_coverage=1 00:27:55.436 --rc genhtml_function_coverage=1 00:27:55.436 --rc genhtml_legend=1 00:27:55.436 --rc geninfo_all_blocks=1 00:27:55.436 --rc geninfo_unexecuted_blocks=1 00:27:55.436 00:27:55.436 ' 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.436 --rc genhtml_branch_coverage=1 00:27:55.436 --rc genhtml_function_coverage=1 00:27:55.436 --rc genhtml_legend=1 00:27:55.436 --rc geninfo_all_blocks=1 00:27:55.436 --rc geninfo_unexecuted_blocks=1 00:27:55.436 00:27:55.436 ' 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.436 20:47:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:55.436 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:55.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:55.437 20:47:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:55.437 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.005 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:02.006 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:02.006 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:02.006 Found net devices under 0000:af:00.0: cvl_0_0 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:02.006 Found net devices under 0000:af:00.1: cvl_0_1 00:28:02.006 20:47:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:02.006 20:47:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:02.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:28:02.006 00:28:02.006 --- 10.0.0.2 ping statistics --- 00:28:02.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.006 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:02.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:28:02.006 00:28:02.006 --- 10.0.0.1 ping statistics --- 00:28:02.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.006 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=504894 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:02.006 20:47:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 504894 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 504894 ']' 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.006 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.006 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.006 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:02.006 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:02.006 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:02.006 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b924194ee33a7bafb8f0cd44c47e84c 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kcZ 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b924194ee33a7bafb8f0cd44c47e84c 0 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b924194ee33a7bafb8f0cd44c47e84c 0 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b924194ee33a7bafb8f0cd44c47e84c 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kcZ 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kcZ 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kcZ 00:28:02.007 20:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aeff1bae45ffcb2def01dfd4531c1a4024d263f0c30ccdcd3b54f781e75f1c6a 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Pod 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aeff1bae45ffcb2def01dfd4531c1a4024d263f0c30ccdcd3b54f781e75f1c6a 3 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aeff1bae45ffcb2def01dfd4531c1a4024d263f0c30ccdcd3b54f781e75f1c6a 3 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aeff1bae45ffcb2def01dfd4531c1a4024d263f0c30ccdcd3b54f781e75f1c6a 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Pod 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Pod 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Pod 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c87ed5a8abf3cc6bdc7637c77d6e594d209f136b1c766cac 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.NzN 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c87ed5a8abf3cc6bdc7637c77d6e594d209f136b1c766cac 0 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c87ed5a8abf3cc6bdc7637c77d6e594d209f136b1c766cac 0 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:02.007 20:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c87ed5a8abf3cc6bdc7637c77d6e594d209f136b1c766cac 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.NzN 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.NzN 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.NzN 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=24c9f189733bc36468b7276791c528e2cbe8e366a1a390e5 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pdc 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 24c9f189733bc36468b7276791c528e2cbe8e366a1a390e5 2 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 24c9f189733bc36468b7276791c528e2cbe8e366a1a390e5 2 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=24c9f189733bc36468b7276791c528e2cbe8e366a1a390e5 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pdc 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pdc 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.pdc 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8811a5043cfbcf376d1925752d817239 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Dwx 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8811a5043cfbcf376d1925752d817239 1 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8811a5043cfbcf376d1925752d817239 1 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8811a5043cfbcf376d1925752d817239 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Dwx 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Dwx 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Dwx 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.007 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=ccd3e5dd188143cec299663f1b1a555a 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HYn 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ccd3e5dd188143cec299663f1b1a555a 1 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ccd3e5dd188143cec299663f1b1a555a 1 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ccd3e5dd188143cec299663f1b1a555a 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HYn 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HYn 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.HYn 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:02.008 20:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0a951548b178c087f68b37b2ae1afee9dda1f39d927d467e 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZaU 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0a951548b178c087f68b37b2ae1afee9dda1f39d927d467e 2 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0a951548b178c087f68b37b2ae1afee9dda1f39d927d467e 2 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0a951548b178c087f68b37b2ae1afee9dda1f39d927d467e 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:02.008 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZaU 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZaU 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ZaU 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ccc4471d73a3151ecf6b39f2b4c269e8 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Nte 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ccc4471d73a3151ecf6b39f2b4c269e8 0 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ccc4471d73a3151ecf6b39f2b4c269e8 0 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ccc4471d73a3151ecf6b39f2b4c269e8 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Nte 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Nte 00:28:02.266 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Nte 00:28:02.266 20:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d2ccdfe666add4bfe71514770ea0ad10684cf735867f84c3882d594e71f3283c 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kE6 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d2ccdfe666add4bfe71514770ea0ad10684cf735867f84c3882d594e71f3283c 3 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d2ccdfe666add4bfe71514770ea0ad10684cf735867f84c3882d594e71f3283c 3 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d2ccdfe666add4bfe71514770ea0ad10684cf735867f84c3882d594e71f3283c 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kE6 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kE6 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.kE6 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 504894 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 504894 ']' 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
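The trace above repeats one recipe for all five key/ckey pairs: `gen_dhchap_key` draws `len/2` random bytes as a hex string with `xxd -p -c0 -l N /dev/urandom`, then pipes it through `python -` to wrap it in a `DHHC-1:<digest>:...:` envelope before `chmod 0600`. The xtrace elides the python stdin, so the sketch below is a reconstruction, assuming the NVMe DH-HMAC-CHAP secret layout (base64 of the ASCII secret followed by its little-endian CRC32); the function name mirrors the script but is illustrative, not SPDK's actual helper.

```python
import base64
import os
import struct
import zlib

def gen_dhchap_key(digest_idx: int, hex_len: int) -> str:
    """Reconstruction of the gen_dhchap_key steps traced above.

    digest_idx follows the log's digests map: null=0, sha256=1,
    sha384=2, sha512=3. hex_len is the hex-string length (32/48/64),
    so hex_len // 2 random bytes are drawn, like `xxd -p -c0 -l N`.
    """
    key = os.urandom(hex_len // 2).hex()          # xxd -p -c0 -l N /dev/urandom
    payload = key.encode()                        # the ASCII hex string is the secret
    crc = struct.pack("<I", zlib.crc32(payload))  # assumed: little-endian CRC32 trailer
    b64 = base64.b64encode(payload + crc).decode()
    return f"DHHC-1:{digest_idx:02}:{b64}:"

# e.g. a sha512-indexed 64-hex-char key, as in `gen_dhchap_key sha512 64`
print(gen_dhchap_key(3, 64))
```

Decoding the keys that surface later in the trace (e.g. `DHHC-1:00:Yzg3ZWQ1...`) shows exactly this shape: base64 of the hex string from the log plus a 4-byte trailer.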
00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.267 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kcZ 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Pod ]] 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Pod 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NzN 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.pdc ]] 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pdc 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.526 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Dwx 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.HYn ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HYn 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.ZaU 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Nte ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Nte 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.kE6 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.527 20:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:02.527 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:28:05.812 Waiting for block devices as requested
00:28:05.812 0000:86:00.0 (8086 0a54): vfio-pci -> nvme
00:28:05.812 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:28:05.812 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:28:05.812 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:28:05.812 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:28:05.812 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:28:05.812 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:28:05.812 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:28:05.812 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:28:06.070 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:28:06.070 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:28:06.070 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:28:06.327 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:28:06.327 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:28:06.327 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:28:06.327 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:28:06.585 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:07.154 No valid GPT data, bailing 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:28:07.154 
00:28:07.154 Discovery Log Number of Records 2, Generation counter 2
00:28:07.154 =====Discovery Log Entry 0======
00:28:07.154 trtype: tcp
00:28:07.154 adrfam: ipv4
00:28:07.154 subtype: current discovery subsystem
00:28:07.154 treq: not specified, sq flow control disable supported
00:28:07.154 portid: 1
00:28:07.154 trsvcid: 4420
00:28:07.154 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:28:07.154 traddr: 10.0.0.1
00:28:07.154 eflags: none
00:28:07.154 sectype: none
00:28:07.154 =====Discovery Log Entry 1======
00:28:07.154 trtype: tcp
00:28:07.154 adrfam: ipv4
00:28:07.154 subtype: nvme subsystem
00:28:07.154 treq: not specified, sq flow control disable supported
00:28:07.154 portid: 1
00:28:07.154 trsvcid: 4420
00:28:07.154 subnqn: nqn.2024-02.io.spdk:cnode0
00:28:07.154 traddr: 10.0.0.1
00:28:07.154 eflags: none
00:28:07.154 sectype: none
00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.154 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.415 nvme0n1 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:07.415 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.674 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.674 nvme0n1 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.674 20:48:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.674 
20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.674 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.933 nvme0n1 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.933 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.934 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:28:08.192 nvme0n1 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.192 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.193 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.451 nvme0n1 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.451 20:48:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.451 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.452 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.710 nvme0n1 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.710 
20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.710 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:08.711 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:08.711 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:08.711 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.711 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:08.994 
20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.994 20:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.994 nvme0n1 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.994 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.253 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.254 20:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.254 20:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.254 nvme0n1 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.254 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.514 20:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.514 nvme0n1 00:28:09.514 20:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.514 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:09.773 20:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.773 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.774 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.774 nvme0n1 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.774 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.033 20:48:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.033 nvme0n1 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.033 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.601 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.861 nvme0n1 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.861 
20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.861 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.121 nvme0n1 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.121 20:48:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.121 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.380 nvme0n1 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.380 20:48:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.380 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:11.381 
20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.381 20:48:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.381 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.639 nvme0n1 00:28:11.639 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.639 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.639 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.639 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.639 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.639 20:48:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.906 
20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.906 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.164 nvme0n1 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.164 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.539 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:13.539 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:13.539 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:13.539 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:13.539 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.540 20:48:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.540 nvme0n1 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.540 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.799 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.799 20:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.799 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.058 nvme0n1 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.058 20:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.058 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.059 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.059 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.059 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.317 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.317 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.317 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.317 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.576 nvme0n1 00:28:14.576 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.576 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.577 20:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.577 20:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.577 20:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.577 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.145 nvme0n1 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.145 20:48:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.145 20:48:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.145 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.404 nvme0n1 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.404 20:48:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.404 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.971 nvme0n1 00:28:15.971 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.971 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.971 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.971 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.971 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.971 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.971 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.971 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.972 20:48:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:15.972 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.230 20:48:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.230 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.231 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.231 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.231 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.231 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.231 20:48:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.798 nvme0n1 00:28:16.798 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.798 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.798 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.798 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.798 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.798 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.798 20:48:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.798 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.365 nvme0n1 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.365 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.932 nvme0n1 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.932 
20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.932 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.499 nvme0n1 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.499 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.500 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.758 nvme0n1 00:28:18.758 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.759 
20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.759 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.018 nvme0n1 
00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:19.018 20:48:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.018 
20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.018 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.019 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.277 nvme0n1 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.277 20:48:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:19.277 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.278 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.537 nvme0n1 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.537 20:48:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.537 nvme0n1 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.537 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.796 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.796 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.796 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.796 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.796 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.797 nvme0n1 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.797 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:20.056 
20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.056 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.057 nvme0n1 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.057 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 
00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.316 20:48:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.316 nvme0n1 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.316 20:48:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.316 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:20.575 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.576 nvme0n1 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:20.576 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:20.576 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.835 nvme0n1 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.835 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.836 20:48:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.836 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.095 20:48:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.095 20:48:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.095 nvme0n1 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.095 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.354 
20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.619 nvme0n1 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.619 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.620 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.620 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.620 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.620 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.620 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.620 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.620 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.620 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.878 nvme0n1 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.878 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.137 nvme0n1 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.137 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.138 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.395 nvme0n1 00:28:22.395 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.395 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.395 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.395 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.395 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.395 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.652 20:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.652 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.909 nvme0n1 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.909 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.910 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.475 nvme0n1 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.475 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.733 nvme0n1 00:28:23.733 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.733 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.733 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.733 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.733 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.733 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.733 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.733 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:23.733 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.733 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.991 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.250 nvme0n1 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.250 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.816 nvme0n1 00:28:24.816 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.816 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.816 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.816 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.816 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.816 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.816 20:48:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.816 20:48:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.816 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.382 nvme0n1 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:25.382 20:48:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:25.382 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.383 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.950 nvme0n1 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.950 
20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.950 20:48:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.950 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.518 nvme0n1 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.518 20:48:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:26.518 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.519 20:48:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.519 20:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.087 nvme0n1 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:27.087 20:48:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:27.087 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.088 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:27.346 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.346 20:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.914 nvme0n1 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.914 
20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.914 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.915 nvme0n1 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.915 20:48:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.915 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.174 nvme0n1 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:28.174 20:48:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.174 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.175 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.175 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.434 nvme0n1 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.434 20:48:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.434 20:48:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.434 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.693 nvme0n1 00:28:28.693 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.693 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.693 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.693 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.693 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.693 20:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.693 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.693 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.693 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.693 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.693 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.693 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.693 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:28.693 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:28:28.693 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.693 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.694 20:48:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.694 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.953 nvme0n1 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.953 20:48:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.953 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.954 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.954 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.954 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.954 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.954 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.954 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.954 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.954 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.213 nvme0n1 00:28:29.213 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.213 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.213 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.213 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.213 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.213 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.213 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.213 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.214 20:48:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.214 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.473 nvme0n1 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.473 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:29.474 
20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.474 20:48:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.474 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.733 nvme0n1 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.733 20:48:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.733 20:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.733 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.993 nvme0n1 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.993 20:48:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.993 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.252 nvme0n1 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.252 
20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.252 20:48:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.252 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.511 nvme0n1 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.511 20:48:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:30.511 20:48:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.511 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.512 20:48:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.512 20:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.770 nvme0n1 00:28:30.770 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.770 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.770 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.771 20:48:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:30.771 20:48:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.771 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.030 nvme0n1 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:31.030 20:48:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.030 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.289 nvme0n1 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.289 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.548 
20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.548 nvme0n1 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.548 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.807 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:31.807 20:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:31.807 20:48:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.807 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.808 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.808 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.808 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.808 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.808 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.065 nvme0n1 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:32.065 20:48:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.065 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:28:32.322 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.322 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.322 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.322 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.322 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:32.322 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.322 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.580 nvme0n1 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.580 
20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.580 20:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.148 nvme0n1 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.148 20:48:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.148 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.149 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.149 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.149 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:33.149 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.149 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:33.407 nvme0n1 00:28:33.407 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.407 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.407 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.407 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.407 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.407 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.408 
20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.408 20:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.975 nvme0n1 00:28:33.975 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.975 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGI5MjQxOTRlZTMzYTdiYWZiOGYwY2Q0NGM0N2U4NGMhK2tm: 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: ]] 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWVmZjFiYWU0NWZmY2IyZGVmMDFkZmQ0NTMxYzFhNDAyNGQyNjNmMGMzMGNjZGNkM2I1NGY3ODFlNzVmMWM2YbYTHx8=: 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.976 20:48:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.976 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.544 nvme0n1 00:28:34.544 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.544 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.545 20:48:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.545 20:48:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.545 20:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.113 nvme0n1 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.113 20:48:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.113 20:48:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.113 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.114 20:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.682 nvme0n1 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.682 20:48:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE5NTE1NDhiMTc4YzA4N2Y2OGIzN2IyYWUxYWZlZTlkZGExZjM5ZDkyN2Q0NjdlpU+PgA==: 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: ]] 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NjNDQ3MWQ3M2EzMTUxZWNmNmIzOWYyYjRjMjY5ZTg4NmNb: 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.682 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:36.249 nvme0n1 00:28:36.249 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.249 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.249 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.249 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.249 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.249 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.249 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.249 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.249 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.249 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.507 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJjY2RmZTY2NmFkZDRiZmU3MTUxNDc3MGVhMGFkMTA2ODRjZjczNTg2N2Y4NGMzODgyZDU5NGU3MWYzMjgzY8q1Qjw=: 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.508 
20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.508 20:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.076 nvme0n1 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:37.076 
20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.076 request: 00:28:37.076 { 00:28:37.076 "name": "nvme0", 00:28:37.076 "trtype": "tcp", 00:28:37.076 "traddr": "10.0.0.1", 00:28:37.076 "adrfam": "ipv4", 00:28:37.076 "trsvcid": "4420", 00:28:37.076 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:37.076 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:37.076 "prchk_reftag": false, 00:28:37.076 "prchk_guard": false, 00:28:37.076 "hdgst": false, 00:28:37.076 "ddgst": false, 00:28:37.076 "allow_unrecognized_csi": false, 00:28:37.076 "method": "bdev_nvme_attach_controller", 00:28:37.076 "req_id": 1 00:28:37.076 } 00:28:37.076 Got JSON-RPC error response 00:28:37.076 response: 00:28:37.076 { 00:28:37.076 "code": -5, 00:28:37.076 "message": "Input/output 
error" 00:28:37.076 } 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.076 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.077 request: 00:28:37.077 { 00:28:37.077 "name": "nvme0", 00:28:37.077 "trtype": "tcp", 00:28:37.077 "traddr": "10.0.0.1", 
00:28:37.077 "adrfam": "ipv4", 00:28:37.077 "trsvcid": "4420", 00:28:37.077 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:37.077 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:37.077 "prchk_reftag": false, 00:28:37.077 "prchk_guard": false, 00:28:37.077 "hdgst": false, 00:28:37.077 "ddgst": false, 00:28:37.077 "dhchap_key": "key2", 00:28:37.077 "allow_unrecognized_csi": false, 00:28:37.077 "method": "bdev_nvme_attach_controller", 00:28:37.077 "req_id": 1 00:28:37.077 } 00:28:37.077 Got JSON-RPC error response 00:28:37.077 response: 00:28:37.077 { 00:28:37.077 "code": -5, 00:28:37.077 "message": "Input/output error" 00:28:37.077 } 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.077 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.336 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.336 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.337 20:48:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.337 20:48:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.337 request: 00:28:37.337 { 00:28:37.337 "name": "nvme0", 00:28:37.337 "trtype": "tcp", 00:28:37.337 "traddr": "10.0.0.1", 00:28:37.337 "adrfam": "ipv4", 00:28:37.337 "trsvcid": "4420", 00:28:37.337 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:37.337 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:37.337 "prchk_reftag": false, 00:28:37.337 "prchk_guard": false, 00:28:37.337 "hdgst": false, 00:28:37.337 "ddgst": false, 00:28:37.337 "dhchap_key": "key1", 00:28:37.337 "dhchap_ctrlr_key": "ckey2", 00:28:37.337 "allow_unrecognized_csi": false, 00:28:37.337 "method": "bdev_nvme_attach_controller", 00:28:37.337 "req_id": 1 00:28:37.337 } 00:28:37.337 Got JSON-RPC error response 00:28:37.337 response: 00:28:37.337 { 00:28:37.337 "code": -5, 00:28:37.337 "message": "Input/output error" 00:28:37.337 } 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.337 nvme0n1 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.337 20:48:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.337 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.598 20:48:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.598 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.599 request: 00:28:37.599 { 00:28:37.599 "name": "nvme0", 00:28:37.599 "dhchap_key": "key1", 00:28:37.599 "dhchap_ctrlr_key": "ckey2", 00:28:37.599 "method": "bdev_nvme_set_keys", 00:28:37.599 "req_id": 1 00:28:37.599 } 00:28:37.599 Got JSON-RPC error response 00:28:37.599 response: 00:28:37.599 { 00:28:37.599 "code": -13, 00:28:37.599 "message": "Permission denied" 00:28:37.599 } 00:28:37.599 
20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:37.599 20:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:38.976 20:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.976 20:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:38.976 20:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.976 20:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.976 20:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.976 20:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:38.976 20:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg3ZWQ1YThhYmYzY2M2YmRjNzYzN2M3N2Q2ZTU5NGQyMDlmMTM2YjFjNzY2Y2Fjn6B38g==: 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: ]] 00:28:39.912 20:48:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjRjOWYxODk3MzNiYzM2NDY4YjcyNzY3OTFjNTI4ZTJjYmU4ZTM2NmExYTM5MGU1qbabgA==: 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.912 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 nvme0n1 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.913 20:48:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxMWE1MDQzY2ZiY2YzNzZkMTkyNTc1MmQ4MTcyMznI22ML: 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: ]] 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NkM2U1ZGQxODgxNDNjZWMyOTk2NjNmMWIxYTU1NWEz7GAM: 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:39.913 
20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 request: 00:28:39.913 { 00:28:39.913 "name": "nvme0", 00:28:39.913 "dhchap_key": "key2", 00:28:39.913 "dhchap_ctrlr_key": "ckey1", 00:28:39.913 "method": "bdev_nvme_set_keys", 00:28:39.913 "req_id": 1 00:28:39.913 } 00:28:39.913 Got JSON-RPC error response 00:28:39.913 response: 00:28:39.913 { 00:28:39.913 "code": -13, 00:28:39.913 "message": "Permission denied" 00:28:39.913 } 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.913 20:48:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.172 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:40.172 20:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.107 rmmod nvme_tcp 00:28:41.107 rmmod nvme_fabrics 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 504894 ']' 00:28:41.107 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 504894 00:28:41.108 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 504894 ']' 00:28:41.108 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 504894 00:28:41.108 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:41.108 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.108 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 504894 00:28:41.108 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:41.108 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:41.108 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 504894' 00:28:41.108 killing process with pid 504894 00:28:41.108 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 504894 00:28:41.108 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 504894 00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.367 20:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:43.898 20:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:46.431 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:46.431 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:47.368 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:28:47.368 20:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kcZ /tmp/spdk.key-null.NzN /tmp/spdk.key-sha256.Dwx /tmp/spdk.key-sha384.ZaU /tmp/spdk.key-sha512.kE6 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:47.368 20:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:50.661 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:50.661 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:50.661 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:50.661 00:28:50.661 real 0m55.015s 00:28:50.661 user 0m49.655s 00:28:50.661 sys 0m12.698s 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.661 ************************************ 00:28:50.661 END TEST nvmf_auth_host 00:28:50.661 ************************************ 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:28:50.661 20:48:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.661 ************************************ 00:28:50.661 START TEST nvmf_digest 00:28:50.661 ************************************ 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:50.661 * Looking for test storage... 00:28:50.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:50.661 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:50.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.662 --rc genhtml_branch_coverage=1 00:28:50.662 --rc genhtml_function_coverage=1 00:28:50.662 --rc genhtml_legend=1 00:28:50.662 --rc geninfo_all_blocks=1 00:28:50.662 --rc geninfo_unexecuted_blocks=1 00:28:50.662 00:28:50.662 ' 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:50.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.662 --rc genhtml_branch_coverage=1 00:28:50.662 --rc genhtml_function_coverage=1 00:28:50.662 --rc genhtml_legend=1 00:28:50.662 --rc geninfo_all_blocks=1 00:28:50.662 --rc geninfo_unexecuted_blocks=1 00:28:50.662 00:28:50.662 ' 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:50.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.662 --rc genhtml_branch_coverage=1 00:28:50.662 --rc genhtml_function_coverage=1 00:28:50.662 --rc genhtml_legend=1 00:28:50.662 --rc geninfo_all_blocks=1 00:28:50.662 --rc geninfo_unexecuted_blocks=1 00:28:50.662 00:28:50.662 ' 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:50.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.662 --rc genhtml_branch_coverage=1 00:28:50.662 --rc genhtml_function_coverage=1 00:28:50.662 --rc genhtml_legend=1 00:28:50.662 --rc geninfo_all_blocks=1 00:28:50.662 --rc geninfo_unexecuted_blocks=1 00:28:50.662 00:28:50.662 ' 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:50.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:50.662 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.663 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:50.663 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:50.663 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:50.663 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.663 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.663 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.663 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:50.663 20:48:43 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:50.663 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.663 20:48:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.230 20:48:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.230 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:57.231 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:57.231 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:57.231 Found net devices under 0000:af:00.0: cvl_0_0 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:57.231 Found net devices under 0000:af:00.1: cvl_0_1 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:57.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:28:57.231 00:28:57.231 --- 10.0.0.2 ping statistics --- 00:28:57.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.231 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:28:57.231 00:28:57.231 --- 10.0.0.1 ping statistics --- 00:28:57.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.231 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:57.231 ************************************ 00:28:57.231 START TEST nvmf_digest_clean 00:28:57.231 ************************************ 00:28:57.231 
20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:57.231 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=520084 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 520084 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 520084 ']' 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:57.232 20:48:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:57.232 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.232 [2024-12-05 20:48:49.950832] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:28:57.232 [2024-12-05 20:48:49.950871] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.232 [2024-12-05 20:48:50.029424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.232 [2024-12-05 20:48:50.076157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.232 [2024-12-05 20:48:50.076195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.232 [2024-12-05 20:48:50.076202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.232 [2024-12-05 20:48:50.076208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.232 [2024-12-05 20:48:50.076212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:57.232 [2024-12-05 20:48:50.076754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.490 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.490 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:57.490 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:57.490 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:57.490 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.490 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.490 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:57.490 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:57.490 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.491 null0 00:28:57.491 [2024-12-05 20:48:50.894096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.491 [2024-12-05 20:48:50.918292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=520330 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 520330 /var/tmp/bperf.sock 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 520330 ']' 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:57.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:57.491 20:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.750 [2024-12-05 20:48:50.971860] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:28:57.750 [2024-12-05 20:48:50.971902] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520330 ] 00:28:57.750 [2024-12-05 20:48:51.044338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.750 [2024-12-05 20:48:51.083134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.750 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.750 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:57.750 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:57.750 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:57.750 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:58.009 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.009 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.576 nvme0n1 00:28:58.576 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:58.576 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:58.576 Running I/O for 2 seconds... 00:29:00.451 26290.00 IOPS, 102.70 MiB/s [2024-12-05T19:48:53.892Z] 26487.00 IOPS, 103.46 MiB/s 00:29:00.451 Latency(us) 00:29:00.451 [2024-12-05T19:48:53.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.451 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:00.451 nvme0n1 : 2.00 26489.37 103.47 0.00 0.00 4827.74 2383.13 14120.03 00:29:00.451 [2024-12-05T19:48:53.892Z] =================================================================================================================== 00:29:00.451 [2024-12-05T19:48:53.892Z] Total : 26489.37 103.47 0.00 0.00 4827.74 2383.13 14120.03 00:29:00.451 { 00:29:00.451 "results": [ 00:29:00.451 { 00:29:00.451 "job": "nvme0n1", 00:29:00.451 "core_mask": "0x2", 00:29:00.451 "workload": "randread", 00:29:00.451 "status": "finished", 00:29:00.451 "queue_depth": 128, 00:29:00.451 "io_size": 4096, 00:29:00.451 "runtime": 2.004653, 00:29:00.451 "iops": 26489.37247493706, 00:29:00.451 "mibps": 103.47411123022289, 00:29:00.451 "io_failed": 0, 00:29:00.451 "io_timeout": 0, 00:29:00.451 "avg_latency_us": 4827.743140234403, 00:29:00.451 "min_latency_us": 2383.1272727272726, 00:29:00.451 "max_latency_us": 14120.02909090909 00:29:00.451 } 00:29:00.451 ], 00:29:00.451 "core_count": 1 00:29:00.451 } 00:29:00.451 20:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:00.451 20:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:29:00.451 20:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:00.451 20:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:00.451 | select(.opcode=="crc32c") 00:29:00.451 | "\(.module_name) \(.executed)"' 00:29:00.451 20:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 520330 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 520330 ']' 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 520330 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 520330 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 520330' 00:29:00.710 killing process with pid 520330 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 520330 00:29:00.710 Received shutdown signal, test time was about 2.000000 seconds 00:29:00.710 00:29:00.710 Latency(us) 00:29:00.710 [2024-12-05T19:48:54.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.710 [2024-12-05T19:48:54.151Z] =================================================================================================================== 00:29:00.710 [2024-12-05T19:48:54.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.710 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 520330 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=520865 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 520865 /var/tmp/bperf.sock 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 520865 ']' 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.969 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.969 [2024-12-05 20:48:54.273719] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:29:00.969 [2024-12-05 20:48:54.273762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520865 ] 00:29:00.969 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:00.969 Zero copy mechanism will not be used. 
00:29:00.969 [2024-12-05 20:48:54.336579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.969 [2024-12-05 20:48:54.371037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.228 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.228 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:01.228 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:01.228 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:01.228 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:01.487 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.487 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.746 nvme0n1 00:29:01.746 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:01.746 20:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:01.746 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:01.747 Zero copy mechanism will not be used. 00:29:01.747 Running I/O for 2 seconds... 
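The trace above fetches `accel_get_stats` over the bperf socket and pipes it through jq (`.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"`) to read back which accel module executed the crc32c operations. A minimal Python sketch of that same check, using a hypothetical sample payload (field names follow the jq filter visible in the log; this is not the actual digest.sh logic, just an illustration of it):

```python
import json

# Hypothetical accel_get_stats payload; field names match the jq filter
# '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
sample = json.loads("""
{
  "operations": [
    {"opcode": "copy",   "module_name": "software", "executed": 0},
    {"opcode": "crc32c", "module_name": "software", "executed": 1024}
  ]
}
""")

def crc32c_stats(stats):
    """Return (module_name, executed) for the crc32c opcode."""
    for op in stats["operations"]:
        if op["opcode"] == "crc32c":
            return op["module_name"], op["executed"]
    return None, 0

acc_module, acc_executed = crc32c_stats(sample)
# The digest test passes when executed > 0 and the module matches
# the expected one ("software" here, since scan_dsa=false)
assert acc_executed > 0 and acc_module == "software"
```

This mirrors the `(( acc_executed > 0 ))` and `[[ software == software ]]` assertions that follow each run in the trace.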
00:29:04.061 6532.00 IOPS, 816.50 MiB/s [2024-12-05T19:48:57.502Z] 6483.50 IOPS, 810.44 MiB/s 00:29:04.061 Latency(us) 00:29:04.061 [2024-12-05T19:48:57.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.061 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:04.061 nvme0n1 : 2.00 6482.50 810.31 0.00 0.00 2465.64 614.40 4170.47 00:29:04.061 [2024-12-05T19:48:57.502Z] =================================================================================================================== 00:29:04.061 [2024-12-05T19:48:57.502Z] Total : 6482.50 810.31 0.00 0.00 2465.64 614.40 4170.47 00:29:04.061 { 00:29:04.061 "results": [ 00:29:04.061 { 00:29:04.061 "job": "nvme0n1", 00:29:04.061 "core_mask": "0x2", 00:29:04.061 "workload": "randread", 00:29:04.061 "status": "finished", 00:29:04.061 "queue_depth": 16, 00:29:04.061 "io_size": 131072, 00:29:04.061 "runtime": 2.002778, 00:29:04.061 "iops": 6482.495813315305, 00:29:04.061 "mibps": 810.3119766644131, 00:29:04.061 "io_failed": 0, 00:29:04.061 "io_timeout": 0, 00:29:04.061 "avg_latency_us": 2465.638815513994, 00:29:04.061 "min_latency_us": 614.4, 00:29:04.061 "max_latency_us": 4170.472727272727 00:29:04.061 } 00:29:04.061 ], 00:29:04.061 "core_count": 1 00:29:04.061 } 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:04.061 | select(.opcode=="crc32c") 00:29:04.061 | "\(.module_name) \(.executed)"' 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 520865 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 520865 ']' 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 520865 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 520865 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 520865' 00:29:04.061 killing process with pid 520865 00:29:04.061 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 520865 00:29:04.061 Received shutdown signal, test time was about 2.000000 seconds 00:29:04.061 
00:29:04.061 Latency(us) 00:29:04.061 [2024-12-05T19:48:57.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.061 [2024-12-05T19:48:57.502Z] =================================================================================================================== 00:29:04.062 [2024-12-05T19:48:57.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 520865 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=521483 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 521483 /var/tmp/bperf.sock 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:04.062 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 521483 ']' 00:29:04.062 20:48:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:04.321 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.321 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:04.321 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.321 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:04.321 [2024-12-05 20:48:57.543258] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:29:04.321 [2024-12-05 20:48:57.543302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521483 ] 00:29:04.321 [2024-12-05 20:48:57.619156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.321 [2024-12-05 20:48:57.658290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.321 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.321 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:04.321 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:04.321 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:04.321 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:04.579 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.579 20:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.838 nvme0n1 00:29:04.838 20:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:04.839 20:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:04.839 Running I/O for 2 seconds... 
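In the bdevperf results JSON, `mibps` is derived from `iops` and `io_size` (bytes): MiB/s = IOPS x io_size / 2^20. A quick sketch confirming the relation against the randread 131072/16 figures reported earlier in this log:

```python
def mibps(iops, io_size_bytes):
    """Throughput in MiB/s from per-second I/O count and per-I/O size in bytes."""
    return iops * io_size_bytes / (1024 * 1024)

# Figures from the randread run above: iops=6482.495813..., io_size=131072
# (131072 bytes is exactly 1/8 MiB, so MiB/s = IOPS / 8 here)
assert abs(mibps(6482.495813315305, 131072) - 810.3119766644131) < 1e-9
```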
00:29:07.151 31242.00 IOPS, 122.04 MiB/s [2024-12-05T19:49:00.592Z] 31205.00 IOPS, 121.89 MiB/s 00:29:07.151 Latency(us) 00:29:07.151 [2024-12-05T19:49:00.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.151 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:07.151 nvme0n1 : 2.01 31208.98 121.91 0.00 0.00 4095.51 1630.95 12809.31 00:29:07.151 [2024-12-05T19:49:00.592Z] =================================================================================================================== 00:29:07.151 [2024-12-05T19:49:00.592Z] Total : 31208.98 121.91 0.00 0.00 4095.51 1630.95 12809.31 00:29:07.151 { 00:29:07.151 "results": [ 00:29:07.151 { 00:29:07.151 "job": "nvme0n1", 00:29:07.151 "core_mask": "0x2", 00:29:07.151 "workload": "randwrite", 00:29:07.151 "status": "finished", 00:29:07.151 "queue_depth": 128, 00:29:07.151 "io_size": 4096, 00:29:07.151 "runtime": 2.005929, 00:29:07.151 "iops": 31208.980975896953, 00:29:07.151 "mibps": 121.91008193709747, 00:29:07.151 "io_failed": 0, 00:29:07.151 "io_timeout": 0, 00:29:07.151 "avg_latency_us": 4095.5063146843095, 00:29:07.151 "min_latency_us": 1630.9527272727273, 00:29:07.151 "max_latency_us": 12809.309090909092 00:29:07.151 } 00:29:07.151 ], 00:29:07.151 "core_count": 1 00:29:07.151 } 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:07.151 | select(.opcode=="crc32c") 00:29:07.151 | "\(.module_name) \(.executed)"' 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 521483 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 521483 ']' 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 521483 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 521483 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 521483' 00:29:07.151 killing process with pid 521483 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 521483 00:29:07.151 Received shutdown signal, test time was about 2.000000 seconds 00:29:07.151 
00:29:07.151 Latency(us) 00:29:07.151 [2024-12-05T19:49:00.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.151 [2024-12-05T19:49:00.592Z] =================================================================================================================== 00:29:07.151 [2024-12-05T19:49:00.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.151 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 521483 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=522106 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 522106 /var/tmp/bperf.sock 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 522106 ']' 00:29:07.410 20:49:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:07.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.410 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.410 [2024-12-05 20:49:00.724625] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:29:07.410 [2024-12-05 20:49:00.724670] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522106 ] 00:29:07.410 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:07.410 Zero copy mechanism will not be used. 
00:29:07.410 [2024-12-05 20:49:00.798692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.410 [2024-12-05 20:49:00.834262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.700 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.700 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:07.700 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:07.700 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:07.700 20:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:07.700 20:49:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.700 20:49:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.957 nvme0n1 00:29:07.957 20:49:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:07.957 20:49:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.215 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:08.215 Zero copy mechanism will not be used. 00:29:08.215 Running I/O for 2 seconds... 
00:29:10.085 6614.00 IOPS, 826.75 MiB/s [2024-12-05T19:49:03.526Z] 6424.50 IOPS, 803.06 MiB/s 00:29:10.085 Latency(us) 00:29:10.085 [2024-12-05T19:49:03.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.085 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:10.085 nvme0n1 : 2.00 6421.54 802.69 0.00 0.00 2487.72 1817.13 12213.53 00:29:10.085 [2024-12-05T19:49:03.526Z] =================================================================================================================== 00:29:10.085 [2024-12-05T19:49:03.526Z] Total : 6421.54 802.69 0.00 0.00 2487.72 1817.13 12213.53 00:29:10.085 { 00:29:10.085 "results": [ 00:29:10.085 { 00:29:10.085 "job": "nvme0n1", 00:29:10.085 "core_mask": "0x2", 00:29:10.085 "workload": "randwrite", 00:29:10.085 "status": "finished", 00:29:10.085 "queue_depth": 16, 00:29:10.085 "io_size": 131072, 00:29:10.085 "runtime": 2.003725, 00:29:10.085 "iops": 6421.539881969831, 00:29:10.085 "mibps": 802.6924852462289, 00:29:10.085 "io_failed": 0, 00:29:10.085 "io_timeout": 0, 00:29:10.085 "avg_latency_us": 2487.7239201056964, 00:29:10.085 "min_latency_us": 1817.1345454545456, 00:29:10.085 "max_latency_us": 12213.527272727273 00:29:10.085 } 00:29:10.085 ], 00:29:10.085 "core_count": 1 00:29:10.085 } 00:29:10.085 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:10.085 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:10.085 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:10.085 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:10.085 | select(.opcode=="crc32c") 00:29:10.085 | "\(.module_name) \(.executed)"' 00:29:10.085 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 522106 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 522106 ']' 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 522106 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 522106 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 522106' 00:29:10.344 killing process with pid 522106 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 522106 00:29:10.344 Received shutdown signal, test time was about 2.000000 seconds 00:29:10.344 
00:29:10.344 Latency(us) 00:29:10.344 [2024-12-05T19:49:03.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.344 [2024-12-05T19:49:03.785Z] =================================================================================================================== 00:29:10.344 [2024-12-05T19:49:03.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.344 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 522106 00:29:10.604 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 520084 00:29:10.604 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 520084 ']' 00:29:10.604 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 520084 00:29:10.604 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:10.604 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.604 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 520084 00:29:10.604 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:10.604 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:10.604 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 520084' 00:29:10.604 killing process with pid 520084 00:29:10.604 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 520084 00:29:10.604 20:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 520084 00:29:10.863 00:29:10.863 real 0m14.209s 
00:29:10.863 user 0m26.463s 00:29:10.863 sys 0m4.613s 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:10.863 ************************************ 00:29:10.863 END TEST nvmf_digest_clean 00:29:10.863 ************************************ 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:10.863 ************************************ 00:29:10.863 START TEST nvmf_digest_error 00:29:10.863 ************************************ 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=522749 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 522749 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 522749 ']' 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.863 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.863 [2024-12-05 20:49:04.237595] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:29:10.863 [2024-12-05 20:49:04.237631] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.123 [2024-12-05 20:49:04.311107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.123 [2024-12-05 20:49:04.348440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.123 [2024-12-05 20:49:04.348473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:11.123 [2024-12-05 20:49:04.348480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.123 [2024-12-05 20:49:04.348485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.123 [2024-12-05 20:49:04.348490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.123 [2024-12-05 20:49:04.349045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.123 [2024-12-05 20:49:04.413461] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.123 20:49:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.123 null0 00:29:11.123 [2024-12-05 20:49:04.503653] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.123 [2024-12-05 20:49:04.527845] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=522778 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 522778 /var/tmp/bperf.sock 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 522778 ']' 
00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:11.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.123 20:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.382 [2024-12-05 20:49:04.581164] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:29:11.382 [2024-12-05 20:49:04.581201] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522778 ] 00:29:11.382 [2024-12-05 20:49:04.653823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.382 [2024-12-05 20:49:04.691204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.949 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.949 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:11.949 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:11.949 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:12.207 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:12.207 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.207 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:12.207 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.207 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.207 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.466 nvme0n1 00:29:12.466 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:12.466 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.466 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:12.466 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.466 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:12.466 20:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:12.725 Running I/O for 2 seconds... 00:29:12.725 [2024-12-05 20:49:05.960103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.725 [2024-12-05 20:49:05.960134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:05.960144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:05.971133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:05.971155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:05.971164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:05.978665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:05.978685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:05.978692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:05.989354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:05.989373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19800 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:05.989381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.001193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.001211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.001219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.011207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.011225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.011232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.019052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.019075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.019083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.029264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.029283] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.029294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.040615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.040633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.040640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.051395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.051413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.051421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.062739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.062757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.062764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.074250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 
20:49:06.074269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.074276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.081666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.081682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.081689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.092292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.092309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.092317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.103145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.103162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.103169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.111010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.111027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.111034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.121801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.121819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.121826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.132426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.132444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.132451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.139388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.139404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.139411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.148184] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.148201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.148208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.726 [2024-12-05 20:49:06.157938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.726 [2024-12-05 20:49:06.157956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.726 [2024-12-05 20:49:06.157963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.166369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.166387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.166394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.174406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.174423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.174430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.183346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.183364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.183371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.191600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.191618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.191628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.201161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.201179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.201185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.209463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.209480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.209487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.217229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.217248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.217254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.229107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.229124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.229131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.239567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.239585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.239592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.248926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.248943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.248950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.257930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.257947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.257954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.269288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.269306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.269313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.277552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.277573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.986 [2024-12-05 20:49:06.277580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.986 [2024-12-05 20:49:06.288942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.986 [2024-12-05 20:49:06.288961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:12.987 [2024-12-05 20:49:06.288968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.299568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.987 [2024-12-05 20:49:06.299586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.987 [2024-12-05 20:49:06.299593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.307473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.987 [2024-12-05 20:49:06.307490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.987 [2024-12-05 20:49:06.307497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.318818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.987 [2024-12-05 20:49:06.318836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.987 [2024-12-05 20:49:06.318843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.326508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.987 [2024-12-05 20:49:06.326525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:13218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.987 [2024-12-05 20:49:06.326532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.337368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.987 [2024-12-05 20:49:06.337385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.987 [2024-12-05 20:49:06.337392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.348141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.987 [2024-12-05 20:49:06.348159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.987 [2024-12-05 20:49:06.348166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.359956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.987 [2024-12-05 20:49:06.359974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.987 [2024-12-05 20:49:06.359981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.369629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.987 [2024-12-05 20:49:06.369645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.987 [2024-12-05 20:49:06.369652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.377260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.987 [2024-12-05 20:49:06.377278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.987 [2024-12-05 20:49:06.377285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.386694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.987 [2024-12-05 20:49:06.386714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.987 [2024-12-05 20:49:06.386721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.396612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:12.987 [2024-12-05 20:49:06.396630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.987 [2024-12-05 20:49:06.396637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.987 [2024-12-05 20:49:06.404432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 
00:29:12.987 [2024-12-05 20:49:06.404450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.987 [2024-12-05 20:49:06.404458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.987 [2024-12-05 20:49:06.413518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:12.987 [2024-12-05 20:49:06.413537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.987 [2024-12-05 20:49:06.413545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.987 [2024-12-05 20:49:06.424794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:12.987 [2024-12-05 20:49:06.424814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.987 [2024-12-05 20:49:06.424822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.246 [2024-12-05 20:49:06.432313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.246 [2024-12-05 20:49:06.432331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.246 [2024-12-05 20:49:06.432339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.246 [2024-12-05 20:49:06.443368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.246 [2024-12-05 20:49:06.443388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.443398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.455251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.455269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.455277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.462500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.462518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.462525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.473471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.473489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.473496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.485187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.485205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.485211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.496427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.496446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.496453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.506781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.506799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.506807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.517564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.517582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.517589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.525175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.525194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.525201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.534384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.534404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.534411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.542712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.542730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.542737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.551037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.551055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.551069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.561364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.561383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.561391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.570343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.570362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.570370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.578740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.578758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.578765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.586821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.586840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.586847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.594866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.594884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.594892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.602793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.602811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.602818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.611662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.611680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.611688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.623573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.623604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.623612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.632543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.632562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.632569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.640732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.640751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.640758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.650026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.650044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.650051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.658618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.658637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.658644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.247 [2024-12-05 20:49:06.666214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.247 [2024-12-05 20:49:06.666231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.247 [2024-12-05 20:49:06.666238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.248 [2024-12-05 20:49:06.674765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.248 [2024-12-05 20:49:06.674783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.248 [2024-12-05 20:49:06.674790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.248 [2024-12-05 20:49:06.683385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.248 [2024-12-05 20:49:06.683404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.248 [2024-12-05 20:49:06.683414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.693710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.693728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.693735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.701489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.701507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.701514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.712962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.712980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.712987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.722994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.723012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.723019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.731404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.731423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.731430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.739267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.739285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.739292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.749405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.749423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.749430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.757756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.757773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.757780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.767408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.767429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.767436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.775118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.775135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.775141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.784132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.784151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.784158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.793508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.507 [2024-12-05 20:49:06.793526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.507 [2024-12-05 20:49:06.793533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.507 [2024-12-05 20:49:06.801936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.801955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.801963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.811839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.811857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.811864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.820773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.820791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.820799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.828819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.828837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.828844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.839178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.839196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.839203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.850298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.850317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.850324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.860842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.860860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.860867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.871995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.872012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.872019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.879811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.879829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.879836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.890085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.890104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.890111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.897640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.897657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.897664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.908422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.908440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.908447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.918232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.918249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.918256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.926211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.926228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.926238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.934570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.934587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.934594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.508 [2024-12-05 20:49:06.945049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.508 [2024-12-05 20:49:06.945072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.508 [2024-12-05 20:49:06.945080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 26857.00 IOPS, 104.91 MiB/s [2024-12-05T19:49:07.210Z] [2024-12-05 20:49:06.953677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:06.953695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:06.953702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:06.962578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:06.962596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:06.962603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:06.970148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:06.970166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:06.970173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:06.980024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:06.980041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:06.980049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:06.986963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:06.986981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:06.986988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:06.997718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:06.997736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:06.997743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.009686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.009704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.009711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.018821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.018839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.018846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.027035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.027052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.027065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.035512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.035529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.035536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.043815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.043832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.043840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.053237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.053255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.053262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.063674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.063692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.063699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.074718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.074736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.074743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.082607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.082624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.082634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.091477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.091494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.091501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.101607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.101625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.101632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.111002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.111018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.769 [2024-12-05 20:49:07.111025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.769 [2024-12-05 20:49:07.120414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.769 [2024-12-05 20:49:07.120431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.770 [2024-12-05 20:49:07.120438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.770 [2024-12-05 20:49:07.128065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.770 [2024-12-05 20:49:07.128083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.770 [2024-12-05 20:49:07.128090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.770 [2024-12-05 20:49:07.137557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.770 [2024-12-05 20:49:07.137575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.770 [2024-12-05 20:49:07.137582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.770 [2024-12-05 20:49:07.147039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.770 [2024-12-05 20:49:07.147062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.770 [2024-12-05 20:49:07.147070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:13.770 [2024-12-05 20:49:07.154166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0)
00:29:13.770 [2024-12-05 20:49:07.154183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:101 nsid:1 lba:5868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.770 [2024-12-05 20:49:07.154190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.770 [2024-12-05 20:49:07.165035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:13.770 [2024-12-05 20:49:07.165056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.770 [2024-12-05 20:49:07.165069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.770 [2024-12-05 20:49:07.175119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:13.770 [2024-12-05 20:49:07.175137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.770 [2024-12-05 20:49:07.175144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.770 [2024-12-05 20:49:07.183007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:13.770 [2024-12-05 20:49:07.183024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.770 [2024-12-05 20:49:07.183031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.770 [2024-12-05 20:49:07.191540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:13.770 [2024-12-05 20:49:07.191557] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.770 [2024-12-05 20:49:07.191564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.770 [2024-12-05 20:49:07.200950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:13.770 [2024-12-05 20:49:07.200967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.770 [2024-12-05 20:49:07.200974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.208425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.208443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.208451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.219856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.219873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.219881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.227548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.227565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.227572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.238759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.238777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.238784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.250042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.250065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.250073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.257463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.257480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.257488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.268517] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.268535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.268542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.276219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.276237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.276244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.287532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.287550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.287558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.298519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.298537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.298545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.309338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.309356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.309363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.321784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.321802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.321809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.329330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.329347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.329357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.340571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.340589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.340596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.351401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.351418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.351425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.359789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.359806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.359813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.370930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.370946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.370953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.378346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.378364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.378371] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.030 [2024-12-05 20:49:07.388642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.030 [2024-12-05 20:49:07.388660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.030 [2024-12-05 20:49:07.388666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.031 [2024-12-05 20:49:07.399747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.031 [2024-12-05 20:49:07.399765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.031 [2024-12-05 20:49:07.399772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.031 [2024-12-05 20:49:07.410886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.031 [2024-12-05 20:49:07.410903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.031 [2024-12-05 20:49:07.410910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.031 [2024-12-05 20:49:07.421793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.031 [2024-12-05 20:49:07.421811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3189 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:14.031 [2024-12-05 20:49:07.421819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.031 [2024-12-05 20:49:07.429595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.031 [2024-12-05 20:49:07.429612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.031 [2024-12-05 20:49:07.429619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.031 [2024-12-05 20:49:07.440696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.031 [2024-12-05 20:49:07.440715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.031 [2024-12-05 20:49:07.440722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.031 [2024-12-05 20:49:07.451578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.031 [2024-12-05 20:49:07.451596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.031 [2024-12-05 20:49:07.451602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.031 [2024-12-05 20:49:07.460000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.031 [2024-12-05 20:49:07.460018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:116 nsid:1 lba:21207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.031 [2024-12-05 20:49:07.460025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.290 [2024-12-05 20:49:07.471295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.290 [2024-12-05 20:49:07.471313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.290 [2024-12-05 20:49:07.471321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.290 [2024-12-05 20:49:07.482104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.290 [2024-12-05 20:49:07.482121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.290 [2024-12-05 20:49:07.482128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.290 [2024-12-05 20:49:07.490029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.290 [2024-12-05 20:49:07.490046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.290 [2024-12-05 20:49:07.490054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.290 [2024-12-05 20:49:07.502079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.290 [2024-12-05 20:49:07.502097] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.290 [2024-12-05 20:49:07.502107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.512192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.512210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.512217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.524527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.524544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.524551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.533753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.533771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.533778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.541217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.541235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.541241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.550388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.550407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.550414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.561287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.561304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.561311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.572810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.572827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.572834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.582156] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.582176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.582183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.590716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.590742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.590749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.601345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.601364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.601371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.609066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.609083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.609090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.619619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.619637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.619645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.631495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.631513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.631521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.639148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.639166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.639173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.649939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.649957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.649964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.660831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.660849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.660857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.668467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.668485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.668492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.678905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.678923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.678931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.687067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.687085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.687092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.696786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.696805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.696812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.708019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.708036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.708043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.715929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.715948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.291 [2024-12-05 20:49:07.715955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.291 [2024-12-05 20:49:07.727995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.291 [2024-12-05 20:49:07.728013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:14.291 [2024-12-05 20:49:07.728020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.551 [2024-12-05 20:49:07.737661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.737679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.737686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.749014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.749032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.749039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.759337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.759355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.759365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.767856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.767874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 
nsid:1 lba:24387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.767882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.775675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.775693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.775700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.784067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.784084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.784091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.792184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.792201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.792209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.802043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.802067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.802075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.810539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.810557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.810563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.818795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.818813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.818820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.827768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.827786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.827793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.836642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 
00:29:14.552 [2024-12-05 20:49:07.836662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.836669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.845025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.845042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.845049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.853290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.853307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.853314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.861255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.861273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.861280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.869721] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.869738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.869745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.878358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.878375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.878382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.887384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.887404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.887411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.898110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.898129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.898136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.905939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.905956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.905963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.914046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.914071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.914078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.923513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.923532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.923540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.552 [2024-12-05 20:49:07.931972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.552 [2024-12-05 20:49:07.931990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.552 [2024-12-05 20:49:07.931997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.553 [2024-12-05 20:49:07.940514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.553 [2024-12-05 20:49:07.940533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.553 [2024-12-05 20:49:07.940540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.553 26875.50 IOPS, 104.98 MiB/s [2024-12-05T19:49:07.994Z] [2024-12-05 20:49:07.951663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85b9a0) 00:29:14.553 [2024-12-05 20:49:07.951682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.553 [2024-12-05 20:49:07.951690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.553 00:29:14.553 Latency(us) 00:29:14.553 [2024-12-05T19:49:07.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.553 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:14.553 nvme0n1 : 2.00 26892.64 105.05 0.00 0.00 4754.66 2115.03 16205.27 00:29:14.553 [2024-12-05T19:49:07.994Z] =================================================================================================================== 00:29:14.553 [2024-12-05T19:49:07.994Z] Total : 26892.64 105.05 0.00 0.00 4754.66 2115.03 16205.27 00:29:14.553 { 00:29:14.553 "results": [ 00:29:14.553 { 00:29:14.553 "job": "nvme0n1", 00:29:14.553 "core_mask": "0x2", 00:29:14.553 "workload": "randread", 00:29:14.553 "status": "finished", 00:29:14.553 "queue_depth": 128, 00:29:14.553 "io_size": 4096, 
00:29:14.553 "runtime": 2.003485, 00:29:14.553 "iops": 26892.639575539622, 00:29:14.553 "mibps": 105.04937334195165, 00:29:14.553 "io_failed": 0, 00:29:14.553 "io_timeout": 0, 00:29:14.553 "avg_latency_us": 4754.66295716496, 00:29:14.553 "min_latency_us": 2115.0254545454545, 00:29:14.553 "max_latency_us": 16205.265454545455 00:29:14.553 } 00:29:14.553 ], 00:29:14.553 "core_count": 1 00:29:14.553 } 00:29:14.553 20:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:14.553 20:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:14.553 20:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:14.553 | .driver_specific 00:29:14.553 | .nvme_error 00:29:14.553 | .status_code 00:29:14.553 | .command_transient_transport_error' 00:29:14.553 20:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 )) 00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 522778 00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 522778 ']' 00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 522778 00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 522778 
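The `get_transient_errcount` step above pipes `bdev_get_iostat -b nvme0n1` through a jq filter and checks `(( 211 > 0 ))`. The same extraction can be sketched in Python; the payload below is an illustrative stand-in shaped after the field path the jq filter names (the real iostat JSON carries many more fields, and the count 211 is taken from this run's log):

```python
def transient_errcount(stat: dict) -> int:
    # Equivalent of the jq filter in the log:
    #   .bdevs[0] | .driver_specific | .nvme_error
    #     | .status_code | .command_transient_transport_error
    return (stat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

# Illustrative iostat payload (field names from the jq filter above;
# only the path the test reads is reproduced here).
iostat = {
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 211}
            }
        }
    }]
}

print(transient_errcount(iostat))  # 211, matching the (( 211 > 0 )) check
```

The test passes because every injected digest error surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion, so a positive count confirms the error path was exercised.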
00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 522778' 00:29:14.813 killing process with pid 522778 00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 522778 00:29:14.813 Received shutdown signal, test time was about 2.000000 seconds 00:29:14.813 00:29:14.813 Latency(us) 00:29:14.813 [2024-12-05T19:49:08.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.813 [2024-12-05T19:49:08.254Z] =================================================================================================================== 00:29:14.813 [2024-12-05T19:49:08.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.813 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 522778 00:29:15.071 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:15.071 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:15.071 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:15.071 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:15.072 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:15.072 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=523546 00:29:15.072 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 523546 /var/tmp/bperf.sock 00:29:15.072 20:49:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:15.072 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 523546 ']' 00:29:15.072 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:15.072 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.072 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:15.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:15.072 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.072 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.072 [2024-12-05 20:49:08.391595] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:29:15.072 [2024-12-05 20:49:08.391645] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523546 ] 00:29:15.072 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:15.072 Zero copy mechanism will not be used. 
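`waitforlisten 523546 /var/tmp/bperf.sock` above blocks until the freshly spawned bdevperf process is accepting RPCs on its UNIX-domain socket. A minimal sketch of that wait loop, with a self-contained listener standing in for bdevperf (the polling interval and helper name are assumptions, not SPDK's implementation):

```python
import os
import socket
import tempfile
import time

def wait_for_listen(sock_path: str, timeout: float = 10.0) -> bool:
    """Poll a UNIX-domain socket path until a connect() succeeds,
    mirroring what waitforlisten does for /var/tmp/bperf.sock."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return True
        except OSError:
            time.sleep(0.1)  # socket not created or not listening yet
    return False

# Self-contained demo: stand up a listener in a temp dir and wait on it.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "bperf.sock")
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    print(wait_for_listen(path, timeout=2.0))  # True
    srv.close()
```

Once the socket answers, the test drives bdevperf entirely over this RPC channel (`bperf_rpc`), which is why the `-r /var/tmp/bperf.sock` flag appears on every rpc.py invocation in the log.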
00:29:15.072 [2024-12-05 20:49:08.465538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.072 [2024-12-05 20:49:08.504213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.330 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.330 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:15.330 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:15.330 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:15.330 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:15.330 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.330 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.330 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.330 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.330 20:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.899 nvme0n1 00:29:15.899 20:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:15.899 20:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.899 20:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.899 20:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.899 20:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:15.899 20:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:15.899 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:15.899 Zero copy mechanism will not be used. 00:29:15.899 Running I/O for 2 seconds... 00:29:15.899 [2024-12-05 20:49:09.257588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.257622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.257632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.262780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.262804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.262813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:15.899 
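The `accel_error_inject_error -o crc32c -t corrupt -i 32` RPC above corrupts the accel framework's CRC-32C results, so every affected READ fails the NVMe/TCP data digest check and logs the `data digest error on tqpair` records that follow. For reference, NVMe/TCP data digests use CRC-32C (Castagnoli); a minimal pure-Python bitwise sketch (SPDK itself uses table-driven or hardware-accelerated implementations):

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli): reflected poly 0x1EDC6F41,
    init and final XOR both 0xFFFFFFFF."""
    POLY = 0x82F63B78  # bit-reflected form of 0x1EDC6F41
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ POLY if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII digits "123456789".
print(hex(crc32c(b"123456789")))  # 0xe3069283
```

When the injected result disagrees with the digest carried in the received PDU, `nvme_tcp_accel_seq_recv_compute_crc32_done` raises the digest error and the command completes with TRANSIENT TRANSPORT ERROR (00/22), exactly as the records below show.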
[2024-12-05 20:49:09.267865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.267890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.267898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.272914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.272935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.272943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.278131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.278152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.278160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.283753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.283775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.283784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.289198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.289219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.289227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.294591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.294612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.294620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.299912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.299932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.299940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.305243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.305264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.305271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.310259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.310280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.310287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.315226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.315246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.315254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.320342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.320363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.899 [2024-12-05 20:49:09.320370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:15.899 [2024-12-05 20:49:09.325268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:15.899 [2024-12-05 20:49:09.325288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:29:15.899 [2024-12-05 20:49:09.325295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:15.899 [2024-12-05 20:49:09.330272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:15.899 [2024-12-05 20:49:09.330291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.899 [2024-12-05 20:49:09.330298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:15.899 [2024-12-05 20:49:09.335192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:15.899 [2024-12-05 20:49:09.335213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.899 [2024-12-05 20:49:09.335220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.340404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.340423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.340430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.345609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.345633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.345640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.350837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.350857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.350864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.356092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.356112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.356123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.361217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.361237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.361244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.364432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.364451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.364457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.368563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.368583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.368590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.373682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.373702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.373709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.378498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.378518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.378525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.384213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.384233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.384240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.389489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.389509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.389516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.394528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.394548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.394555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.399617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.399640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.399647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.404689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.404709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.404716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.409722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.409742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.409750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.414787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.414807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.414814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.160 [2024-12-05 20:49:09.419777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.160 [2024-12-05 20:49:09.419797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.160 [2024-12-05 20:49:09.419805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.424705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.424726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.424733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.429623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.429643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.429650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.434541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.434561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.434568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.439579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.439599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.439606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.444558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.444578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.444586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.449683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.449703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.449711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.454736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.454756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.454764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.459921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.459941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.459948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.465024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.465044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.465051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.469971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.469992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.469999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.474837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.474857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.474864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.479783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.479802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.479809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.484716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.484735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.484746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.489650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.489669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.489676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.494559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.494578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.494585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.499498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.499516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.499523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.504563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.504583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.504590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.509639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.509658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.509665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.514839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.514859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.514866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.519970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.519989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.519997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.525014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.525034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.525041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.530043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.530068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.530075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.535080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.535099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.535107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.540216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.540236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.540243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.545997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.161 [2024-12-05 20:49:09.546018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.161 [2024-12-05 20:49:09.546025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.161 [2024-12-05 20:49:09.552329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.162 [2024-12-05 20:49:09.552349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.162 [2024-12-05 20:49:09.552356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.162 [2024-12-05 20:49:09.558614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.162 [2024-12-05 20:49:09.558635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.162 [2024-12-05 20:49:09.558643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.162 [2024-12-05 20:49:09.565228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.162 [2024-12-05 20:49:09.565252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.162 [2024-12-05 20:49:09.565260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.162 [2024-12-05 20:49:09.571513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.162 [2024-12-05 20:49:09.571533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.162 [2024-12-05 20:49:09.571541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.162 [2024-12-05 20:49:09.576913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.162 [2024-12-05 20:49:09.576934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.162 [2024-12-05 20:49:09.576945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.162 [2024-12-05 20:49:09.584250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.162 [2024-12-05 20:49:09.584271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.162 [2024-12-05 20:49:09.584278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.162 [2024-12-05 20:49:09.589588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.162 [2024-12-05 20:49:09.589609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.162 [2024-12-05 20:49:09.589616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.162 [2024-12-05 20:49:09.594764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.162 [2024-12-05 20:49:09.594784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.162 [2024-12-05 20:49:09.594791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.600369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.600390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.600397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.607005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.607026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.607033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.614300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.614321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.614329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.620925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.620946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.620953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.627855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.627877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.627884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.635022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.635047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.635055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.643163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.643185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.643192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.650746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.650768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.650776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.657217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.657239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.657247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.662672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.662693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.662700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.667955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.667975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.667982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.673258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.673279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.673286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.678474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.678494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.678502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.683334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.683354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.683361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.688616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.688640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.688648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.693602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.422 [2024-12-05 20:49:09.693621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.422 [2024-12-05 20:49:09.693629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.422 [2024-12-05 20:49:09.698639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.698659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.698667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.703654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.703674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.703681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.708743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.708763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.708770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.713854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.713874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.713882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.718915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.718935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.718942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.724102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.724122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.724129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.729206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.729226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.729237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.734362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.734382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.734388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.739979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.740001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.740009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.745068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.745088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.745095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.750016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.750036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.750043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.755045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.755071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.755078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:16.423 [2024-12-05 20:49:09.760041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:16.423 [2024-12-05 20:49:09.760068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.423 [2024-12-05 20:49:09.760076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.765191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.765211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 20:49:09.765218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.770234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.770255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 20:49:09.770262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.776335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.776359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 20:49:09.776366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.781678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.781699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 
20:49:09.781706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.786865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.786885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 20:49:09.786893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.792032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.792052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 20:49:09.792066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.797482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.797503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 20:49:09.797510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.802452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.802473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 20:49:09.802480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.807508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.807528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 20:49:09.807534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.812654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.812675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 20:49:09.812681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.817648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.817670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 20:49:09.817677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.822721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.423 [2024-12-05 20:49:09.822740] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.423 [2024-12-05 20:49:09.822747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.423 [2024-12-05 20:49:09.827947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.424 [2024-12-05 20:49:09.827967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.424 [2024-12-05 20:49:09.827974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.424 [2024-12-05 20:49:09.831287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.424 [2024-12-05 20:49:09.831306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.424 [2024-12-05 20:49:09.831315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.424 [2024-12-05 20:49:09.835221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.424 [2024-12-05 20:49:09.835241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.424 [2024-12-05 20:49:09.835248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.424 [2024-12-05 20:49:09.840274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.424 [2024-12-05 
20:49:09.840294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.424 [2024-12-05 20:49:09.840301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.424 [2024-12-05 20:49:09.845132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.424 [2024-12-05 20:49:09.845152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.424 [2024-12-05 20:49:09.845159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.424 [2024-12-05 20:49:09.850172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.424 [2024-12-05 20:49:09.850192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.424 [2024-12-05 20:49:09.850199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.424 [2024-12-05 20:49:09.855222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.424 [2024-12-05 20:49:09.855241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.424 [2024-12-05 20:49:09.855248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.424 [2024-12-05 20:49:09.860330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x23b1480) 00:29:16.424 [2024-12-05 20:49:09.860354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.424 [2024-12-05 20:49:09.860361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.684 [2024-12-05 20:49:09.865384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.684 [2024-12-05 20:49:09.865405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.684 [2024-12-05 20:49:09.865412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.684 [2024-12-05 20:49:09.870406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.684 [2024-12-05 20:49:09.870426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.684 [2024-12-05 20:49:09.870434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.875382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.875402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.875409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.880331] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.880351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.880358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.885373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.885393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.885400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.890416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.890436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.890443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.895511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.895532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.895539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.900722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.900741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.900748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.906025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.906045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.906052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.911327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.911347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.911355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.916522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.916542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.916549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.921540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.921559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.921566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.926064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.926084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.926091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.931150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.931171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.931178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.936340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.936360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.936367] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.942068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.942089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.942096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.946898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.946918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.946929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.952231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.952252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.952259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.957689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.957709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.957717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.962882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.962901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.962909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.967942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.967962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.967969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.973070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.973089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.973097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.978088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.978107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.978114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.983143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.983162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.983172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.987378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.987398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.987406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.992223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.992246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.685 [2024-12-05 20:49:09.992254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.685 [2024-12-05 20:49:09.997005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.685 [2024-12-05 20:49:09.997024] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:09.997031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.001899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.001919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.001926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.006705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.006725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.006733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.011623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.011643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.011651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.016564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.016585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.016592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.022721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.022741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.022749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.027861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.027882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.027890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.033008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.033029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.033036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.038261] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.038281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.038289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.044223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.044244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.044252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.049462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.049481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.049489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.055232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.055255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.055264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.060491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.060512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.060520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.065883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.065902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.065910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.071101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.071121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.071129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.076485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.076505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.076513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.081036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.081055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.081072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.083981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.083999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.084007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.089846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.089864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.089872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.095026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.095045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.095053] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.100084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.100102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.100110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.105223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.105242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.105249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.109952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.109970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.109977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.114947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.114966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.114973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.686 [2024-12-05 20:49:10.119952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.686 [2024-12-05 20:49:10.119971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.686 [2024-12-05 20:49:10.119979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.946 [2024-12-05 20:49:10.124932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.946 [2024-12-05 20:49:10.124957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.124965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.129849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.129869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.129876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.134640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.134660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.134667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.139578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.139597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.139604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.144589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.144609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.144616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.149672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.149692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.149699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.154606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.154625] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.154632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.159406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.159425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.159432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.164310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.164329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.164341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.169453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.169472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.169480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.174640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.174659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.174666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.179615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.179634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.179642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.184643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.184662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.184669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.189717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.189736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.189744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.194736] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.194755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.194763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.199895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.199914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.199921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.204883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.204903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.204910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.210004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.210027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.210034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.215242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.215262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.215270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.220361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.220382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.220389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.225962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.225982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.225989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.231104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.231123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.231130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.236440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.236461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.236468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.241933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.241953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.241961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.247073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.947 [2024-12-05 20:49:10.247092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.947 [2024-12-05 20:49:10.247100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.947 [2024-12-05 20:49:10.252258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.252278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.252285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.258571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.258590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.258597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.948 5939.00 IOPS, 742.38 MiB/s [2024-12-05T19:49:10.389Z] [2024-12-05 20:49:10.263769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.263789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.263796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.268900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.268920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.268927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.273714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.273735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.273742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.278701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.278721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.278727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.284128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.284147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.284154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.289460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.289480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.289487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.294087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.294107] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.294114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.299161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.299181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.299191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.304232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.304251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.304259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.309260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.309280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.309288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.314470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.314491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.314498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.320132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.320152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.320159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.326992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.327012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.327020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.333585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.333606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.333613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.341623] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.341644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.341652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.349334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.349355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.349363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.355698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.355721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.355729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.361988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.362009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.362016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.368796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.368817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.368825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.375402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.375424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.375432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:16.948 [2024-12-05 20:49:10.382945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:16.948 [2024-12-05 20:49:10.382965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-05 20:49:10.382972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.208 [2024-12-05 20:49:10.386624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.208 [2024-12-05 20:49:10.386646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.208 [2024-12-05 20:49:10.386653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.208 [2024-12-05 20:49:10.393744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.208 [2024-12-05 20:49:10.393765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.208 [2024-12-05 20:49:10.393772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.208 [2024-12-05 20:49:10.399320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.208 [2024-12-05 20:49:10.399339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.208 [2024-12-05 20:49:10.399347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.208 [2024-12-05 20:49:10.404442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.208 [2024-12-05 20:49:10.404462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.404469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.409381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.409401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.409408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.415474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.415496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.415503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.422373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.422394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.422402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.429168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.429190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.429197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.435076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.435097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.435104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.440285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.440305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.440312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.445362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.445382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.445389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.450827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.450847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.450854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.456489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.456512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.456519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.461542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.461562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.461569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.466599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.466619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.466626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.471621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.471641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.471649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.476718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.476738] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.476745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.481795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.481815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.481823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.486844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.486864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.486871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.491905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.491925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.491933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.496934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.496954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.496961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.502040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.502065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.502073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.507139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.507158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.507166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.512161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.512181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.512188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.517202] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.517221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.517228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.522271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.522290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.522297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.527288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.209 [2024-12-05 20:49:10.527307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.209 [2024-12-05 20:49:10.527315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.209 [2024-12-05 20:49:10.532318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.532337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.532345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.537344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.537363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.537370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.542350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.542369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.542380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.547373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.547393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.547400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.552430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.552450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.552458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.557457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.557477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.557484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.562504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.562524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.562531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.567582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.567602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.567610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.572578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.572599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 
20:49:10.572607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.577526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.577546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.577555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.582346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.582366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.582374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.587130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.587154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.587161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.591878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.591898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.591905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.597054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.597080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.597087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.603106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.603127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.603135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.607731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.607751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.607759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.612776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.612797] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.612804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.617819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.617839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.617847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.622902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.622922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.622930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.627904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.627924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.627932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.633823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 
20:49:10.633843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.633850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.639103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.639123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.639130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.210 [2024-12-05 20:49:10.644147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.210 [2024-12-05 20:49:10.644167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.210 [2024-12-05 20:49:10.644175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.470 [2024-12-05 20:49:10.649139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.470 [2024-12-05 20:49:10.649159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.470 [2024-12-05 20:49:10.649166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.470 [2024-12-05 20:49:10.654238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23b1480) 00:29:17.470 [2024-12-05 20:49:10.654257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.470 [2024-12-05 20:49:10.654265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.470 [2024-12-05 20:49:10.659284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.470 [2024-12-05 20:49:10.659304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.470 [2024-12-05 20:49:10.659312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.470 [2024-12-05 20:49:10.664284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.470 [2024-12-05 20:49:10.664304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.471 [2024-12-05 20:49:10.664311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.471 [2024-12-05 20:49:10.669273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.471 [2024-12-05 20:49:10.669292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.471 [2024-12-05 20:49:10.669299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.471 [2024-12-05 20:49:10.674235] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.471 [2024-12-05 20:49:10.674255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.471 [2024-12-05 20:49:10.674266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.471 [2024-12-05 20:49:10.679213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.471 [2024-12-05 20:49:10.679233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.471 [2024-12-05 20:49:10.679240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.471 [2024-12-05 20:49:10.684192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.471 [2024-12-05 20:49:10.684212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.471 [2024-12-05 20:49:10.684219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.471 [2024-12-05 20:49:10.689498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.471 [2024-12-05 20:49:10.689519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.471 [2024-12-05 20:49:10.689526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:29:17.471 [2024-12-05 20:49:10.695169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.471 [2024-12-05 20:49:10.695189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.471 [2024-12-05 20:49:10.695197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.471 [2024-12-05 20:49:10.702554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.471 [2024-12-05 20:49:10.702574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.471 [2024-12-05 20:49:10.702582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.471 [2024-12-05 20:49:10.709401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.471 [2024-12-05 20:49:10.709421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.471 [2024-12-05 20:49:10.709428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.471 [2024-12-05 20:49:10.716881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.471 [2024-12-05 20:49:10.716901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.471 [2024-12-05 20:49:10.716909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.723988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.724009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.724016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.731153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.731174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.731181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.738150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.738170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.738177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.742307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.742325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.742333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.749022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.749041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.749049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.756454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.756473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.756480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.763385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.763404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.763412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.770692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.770712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.770719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.777950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.777970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.777978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.785009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.785029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.785039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.792851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.792871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.792878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.799614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.799634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.471 [2024-12-05 20:49:10.799641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.471 [2024-12-05 20:49:10.806778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.471 [2024-12-05 20:49:10.806797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.806805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.814666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.814686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.814693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.823296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.823316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.823324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.831273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.831294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.831302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.839748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.839767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.839775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.847757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.847777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.847785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.855713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.855740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.855747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.862458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.862478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.862485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.869452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.869473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.869480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.876365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.876386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.876393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.882764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.882783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.882791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.889682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.889703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.889710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.898000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.898021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.898028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.472 [2024-12-05 20:49:10.905265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.472 [2024-12-05 20:49:10.905286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.472 [2024-12-05 20:49:10.905293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.911116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.911149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.911157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.918368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.918388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.918395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.925507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.925528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.925535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.933460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.933482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.933489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.940923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.940946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.940954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.947095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.947117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.947124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.952358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.952380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.952387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.958336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.958358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.958365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.963975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.963997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.964004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.971184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.971206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.971217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.978327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.978348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.978356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.985488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.985508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.985515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.991559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.991579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.991587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:10.998506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.732 [2024-12-05 20:49:10.998527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.732 [2024-12-05 20:49:10.998534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.732 [2024-12-05 20:49:11.006380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.006401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.006408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.013173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.013193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.013200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.018616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.018637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.018644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.023596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.023617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.023625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.028715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.028739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.028746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.033695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.033715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.033722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.038664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.038684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.038691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.043682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.043702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.043710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.048626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.048645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.048653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.053592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.053611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.053618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.058657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.058677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.058685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.063675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.063694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.063701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.068690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.068710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.068717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.073668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.073687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.073694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.078729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.078748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.078755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.083776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.083796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.083803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.088719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.088739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.088746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.093694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.093714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.093721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.098713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.098733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.098740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.103662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.103682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.103689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.108637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.108657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.108664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.113622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.113642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.113653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.118570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.118590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.118596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.123521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.123541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.123548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.733 [2024-12-05 20:49:11.128446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.733 [2024-12-05 20:49:11.128465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.733 [2024-12-05 20:49:11.128472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.734 [2024-12-05 20:49:11.133405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.734 [2024-12-05 20:49:11.133424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.734 [2024-12-05 20:49:11.133431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.734 [2024-12-05 20:49:11.138370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.734 [2024-12-05 20:49:11.138390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.734 [2024-12-05 20:49:11.138397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.734 [2024-12-05 20:49:11.143292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.734 [2024-12-05 20:49:11.143311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.734 [2024-12-05 20:49:11.143318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.734 [2024-12-05 20:49:11.148245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.734 [2024-12-05 20:49:11.148266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.734 [2024-12-05 20:49:11.148274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.734 [2024-12-05 20:49:11.153216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.734 [2024-12-05 20:49:11.153236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.734 [2024-12-05 20:49:11.153242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.734 [2024-12-05 20:49:11.158214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.734 [2024-12-05 20:49:11.158233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.734 [2024-12-05 20:49:11.158240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.734 [2024-12-05 20:49:11.163246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.734 [2024-12-05 20:49:11.163266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.734 [2024-12-05 20:49:11.163273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.734 [2024-12-05 20:49:11.168460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.734 [2024-12-05 20:49:11.168480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.734 [2024-12-05 20:49:11.168487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.994 [2024-12-05 20:49:11.173621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.994 [2024-12-05 20:49:11.173641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.994 [2024-12-05 20:49:11.173648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.994 [2024-12-05 20:49:11.178717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.994 [2024-12-05 20:49:11.178737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.994 [2024-12-05 20:49:11.178744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.994 [2024-12-05 20:49:11.183833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.994 [2024-12-05 20:49:11.183852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.994 [2024-12-05 20:49:11.183859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.994 [2024-12-05 20:49:11.189165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.994 [2024-12-05 20:49:11.189185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.994 [2024-12-05 20:49:11.189193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.994 [2024-12-05 20:49:11.194342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.994 [2024-12-05 20:49:11.194362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.994 [2024-12-05 20:49:11.194369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:17.994 [2024-12-05 20:49:11.199612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.994 [2024-12-05 20:49:11.199631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.994 [2024-12-05 20:49:11.199642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:17.994 [2024-12-05 20:49:11.204782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.994 [2024-12-05 20:49:11.204802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.994 [2024-12-05 20:49:11.204809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:17.994 [2024-12-05 20:49:11.209681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480)
00:29:17.994 [2024-12-05 20:49:11.209700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.994 [2024-12-05 20:49:11.209707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:17.994 [2024-12-05 20:49:11.214685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x23b1480) 00:29:17.994 [2024-12-05 20:49:11.214704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.994 [2024-12-05 20:49:11.214711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.994 [2024-12-05 20:49:11.219682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.994 [2024-12-05 20:49:11.219702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.994 [2024-12-05 20:49:11.219708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.994 [2024-12-05 20:49:11.224569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.994 [2024-12-05 20:49:11.224588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.994 [2024-12-05 20:49:11.224595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.994 [2024-12-05 20:49:11.229559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.994 [2024-12-05 20:49:11.229578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.994 [2024-12-05 20:49:11.229585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.994 [2024-12-05 20:49:11.234719] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.994 [2024-12-05 20:49:11.234738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.994 [2024-12-05 20:49:11.234745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.994 [2024-12-05 20:49:11.239767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.994 [2024-12-05 20:49:11.239787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.994 [2024-12-05 20:49:11.239794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.994 [2024-12-05 20:49:11.244763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.994 [2024-12-05 20:49:11.244786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.994 [2024-12-05 20:49:11.244793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.994 [2024-12-05 20:49:11.249728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.994 [2024-12-05 20:49:11.249748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.994 [2024-12-05 20:49:11.249755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:17.994 [2024-12-05 20:49:11.254916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.994 [2024-12-05 20:49:11.254936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.994 [2024-12-05 20:49:11.254943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.994 [2024-12-05 20:49:11.261005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b1480) 00:29:17.994 [2024-12-05 20:49:11.261025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.994 [2024-12-05 20:49:11.261032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.994 5682.50 IOPS, 710.31 MiB/s 00:29:17.994 Latency(us) 00:29:17.994 [2024-12-05T19:49:11.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.994 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:17.994 nvme0n1 : 2.00 5685.29 710.66 0.00 0.00 2811.42 603.23 8757.99 00:29:17.994 [2024-12-05T19:49:11.435Z] =================================================================================================================== 00:29:17.994 [2024-12-05T19:49:11.435Z] Total : 5685.29 710.66 0.00 0.00 2811.42 603.23 8757.99 00:29:17.994 { 00:29:17.994 "results": [ 00:29:17.994 { 00:29:17.994 "job": "nvme0n1", 00:29:17.994 "core_mask": "0x2", 00:29:17.994 "workload": "randread", 00:29:17.994 "status": "finished", 00:29:17.994 "queue_depth": 16, 00:29:17.994 "io_size": 131072, 00:29:17.994 "runtime": 2.001832, 00:29:17.994 "iops": 5685.2922722785925, 00:29:17.994 "mibps": 
710.6615340348241, 00:29:17.994 "io_failed": 0, 00:29:17.994 "io_timeout": 0, 00:29:17.994 "avg_latency_us": 2811.4171069805334, 00:29:17.994 "min_latency_us": 603.2290909090909, 00:29:17.994 "max_latency_us": 8757.992727272727 00:29:17.994 } 00:29:17.994 ], 00:29:17.994 "core_count": 1 00:29:17.994 } 00:29:17.995 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:17.995 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:17.995 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:17.995 | .driver_specific 00:29:17.995 | .nvme_error 00:29:17.995 | .status_code 00:29:17.995 | .command_transient_transport_error' 00:29:17.995 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:18.254 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 368 > 0 )) 00:29:18.254 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 523546 00:29:18.254 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 523546 ']' 00:29:18.254 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 523546 00:29:18.254 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:18.254 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 523546 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 
-- # process_name=reactor_1 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 523546' 00:29:18.255 killing process with pid 523546 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 523546 00:29:18.255 Received shutdown signal, test time was about 2.000000 seconds 00:29:18.255 00:29:18.255 Latency(us) 00:29:18.255 [2024-12-05T19:49:11.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.255 [2024-12-05T19:49:11.696Z] =================================================================================================================== 00:29:18.255 [2024-12-05T19:49:11.696Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 523546 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=524099 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 524099 /var/tmp/bperf.sock 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 524099 ']' 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:18.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.255 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:18.514 [2024-12-05 20:49:11.723071] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:29:18.514 [2024-12-05 20:49:11.723115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524099 ] 00:29:18.514 [2024-12-05 20:49:11.793982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.514 [2024-12-05 20:49:11.828393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.514 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.514 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:18.514 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:18.514 20:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:18.773 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:18.773 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.773 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:18.773 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.773 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.773 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.342 nvme0n1 00:29:19.342 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:19.342 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.342 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:19.342 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.342 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:19.342 20:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:19.342 Running I/O for 2 seconds... 
00:29:19.342 [2024-12-05 20:49:12.637727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef96f8 00:29:19.342 [2024-12-05 20:49:12.638453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.342 [2024-12-05 20:49:12.638482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:19.342 [2024-12-05 20:49:12.646796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee9168 00:29:19.342 [2024-12-05 20:49:12.647516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.342 [2024-12-05 20:49:12.647539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:19.342 [2024-12-05 20:49:12.654475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efbcf0 00:29:19.342 [2024-12-05 20:49:12.655171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.342 [2024-12-05 20:49:12.655189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:19.342 [2024-12-05 20:49:12.663409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efbcf0 00:29:19.342 [2024-12-05 20:49:12.664085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.342 [2024-12-05 20:49:12.664119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:19.342 [2024-12-05 20:49:12.672136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1868 00:29:19.342 [2024-12-05 20:49:12.673081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.673099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.680844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef96f8 00:29:19.343 [2024-12-05 20:49:12.681852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.681870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.689505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeb760 00:29:19.343 [2024-12-05 20:49:12.690626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.690643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.697728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef4f40 00:29:19.343 [2024-12-05 20:49:12.698634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.698651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.706826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef0788 00:29:19.343 [2024-12-05 20:49:12.708178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.708195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.713486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef2d80 00:29:19.343 [2024-12-05 20:49:12.714412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.714429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.722188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef6890 00:29:19.343 [2024-12-05 20:49:12.723182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.723198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.728793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eecc78 00:29:19.343 [2024-12-05 20:49:12.729359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.729377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.737595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eea680 00:29:19.343 [2024-12-05 20:49:12.738284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.738301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.746292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eed920 00:29:19.343 [2024-12-05 20:49:12.747089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.747106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.756271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eed920 00:29:19.343 [2024-12-05 20:49:12.757500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.757518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.764926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef9f68 00:29:19.343 [2024-12-05 20:49:12.766291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 
[2024-12-05 20:49:12.766308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.771571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef0bc0 00:29:19.343 [2024-12-05 20:49:12.772433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.343 [2024-12-05 20:49:12.772450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:19.343 [2024-12-05 20:49:12.781724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef6890 00:29:19.603 [2024-12-05 20:49:12.783085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.603 [2024-12-05 20:49:12.783103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:19.603 [2024-12-05 20:49:12.790211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efe2e8 00:29:19.603 [2024-12-05 20:49:12.791549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.603 [2024-12-05 20:49:12.791567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:19.603 [2024-12-05 20:49:12.796159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef96f8 00:29:19.603 [2024-12-05 20:49:12.796707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24645 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.603 [2024-12-05 20:49:12.796724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:19.603 [2024-12-05 20:49:12.804655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efdeb0 00:29:19.603 [2024-12-05 20:49:12.805224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.603 [2024-12-05 20:49:12.805242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:19.603 [2024-12-05 20:49:12.815209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef4f40 00:29:19.603 [2024-12-05 20:49:12.816434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.603 [2024-12-05 20:49:12.816455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:19.603 [2024-12-05 20:49:12.823886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee3060 00:29:19.603 [2024-12-05 20:49:12.825241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.603 [2024-12-05 20:49:12.825259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:19.603 [2024-12-05 20:49:12.829750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efc128 00:29:19.603 [2024-12-05 20:49:12.830292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:12269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.603 [2024-12-05 20:49:12.830310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:19.603 [2024-12-05 20:49:12.838105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee2c28 00:29:19.603 [2024-12-05 20:49:12.838671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.603 [2024-12-05 20:49:12.838689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:19.603 [2024-12-05 20:49:12.846445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef4f40 00:29:19.604 [2024-12-05 20:49:12.846972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.846988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.854006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efb8b8 00:29:19.604 [2024-12-05 20:49:12.854532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.854549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.862763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efb8b8 00:29:19.604 [2024-12-05 20:49:12.863288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.863306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.870965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef96f8 00:29:19.604 [2024-12-05 20:49:12.871483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.871501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.879303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eefae0 00:29:19.604 [2024-12-05 20:49:12.879810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.879827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.886866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee6b70 00:29:19.604 [2024-12-05 20:49:12.887384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.887405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.896699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee6b70 00:29:19.604 
[2024-12-05 20:49:12.897672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.897689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.903997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee6300 00:29:19.604 [2024-12-05 20:49:12.904520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.904537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.912376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee6300 00:29:19.604 [2024-12-05 20:49:12.912880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.912897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.921862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efa7d8 00:29:19.604 [2024-12-05 20:49:12.922892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.922910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.930252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) 
with pdu=0x200016ee49b0 00:29:19.604 [2024-12-05 20:49:12.931220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.931238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.938107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef81e0 00:29:19.604 [2024-12-05 20:49:12.938812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.938829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.945652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1868 00:29:19.604 [2024-12-05 20:49:12.946377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.946394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.954420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1868 00:29:19.604 [2024-12-05 20:49:12.955104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.955120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.962867] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef7970 00:29:19.604 [2024-12-05 20:49:12.963523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.963539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.970282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee6300 00:29:19.604 [2024-12-05 20:49:12.970945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.970962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.980091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee6300 00:29:19.604 [2024-12-05 20:49:12.981189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.981206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.987250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef92c0 00:29:19.604 [2024-12-05 20:49:12.987916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.987933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:12.995482] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee4578 00:29:19.604 [2024-12-05 20:49:12.996124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:12.996140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:13.003898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1ca0 00:29:19.604 [2024-12-05 20:49:13.004520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:13.004537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:13.011327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ede470 00:29:19.604 [2024-12-05 20:49:13.011930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:13.011946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:13.020069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ede470 00:29:19.604 [2024-12-05 20:49:13.020675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:13.020692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:29:19.604 [2024-12-05 20:49:13.028301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ede470 00:29:19.604 [2024-12-05 20:49:13.028912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:13.028929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:19.604 [2024-12-05 20:49:13.036757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee49b0 00:29:19.604 [2024-12-05 20:49:13.037275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.604 [2024-12-05 20:49:13.037293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:19.864 [2024-12-05 20:49:13.045613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eea680 00:29:19.864 [2024-12-05 20:49:13.046237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.864 [2024-12-05 20:49:13.046254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:19.864 [2024-12-05 20:49:13.053879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef0788 00:29:19.864 [2024-12-05 20:49:13.054738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.864 [2024-12-05 20:49:13.054755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:19.864 [2024-12-05 20:49:13.062256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eef6a8 00:29:19.864 [2024-12-05 20:49:13.063135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.864 [2024-12-05 20:49:13.063153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:19.864 [2024-12-05 20:49:13.070563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eef6a8 00:29:19.864 [2024-12-05 20:49:13.071426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.864 [2024-12-05 20:49:13.071443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:19.864 [2024-12-05 20:49:13.078925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eea680 00:29:19.865 [2024-12-05 20:49:13.079770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.079787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.086609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016edf118 00:29:19.865 [2024-12-05 20:49:13.087453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.087470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.096511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016edf118 00:29:19.865 [2024-12-05 20:49:13.097784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.097800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.103678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee84c0 00:29:19.865 [2024-12-05 20:49:13.104475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.104497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.112196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef8a50 00:29:19.865 [2024-12-05 20:49:13.113123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.113140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.120824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef5378 00:29:19.865 [2024-12-05 20:49:13.121880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.121897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.127433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef0350 00:29:19.865 [2024-12-05 20:49:13.127996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.128013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.136094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eed920 00:29:19.865 [2024-12-05 20:49:13.136800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.136817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.144874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeb760 00:29:19.865 [2024-12-05 20:49:13.145697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.145713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.153649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1868 00:29:19.865 [2024-12-05 20:49:13.154577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 
[2024-12-05 20:49:13.154594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.162161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1ca0 00:29:19.865 [2024-12-05 20:49:13.163077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.163094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.170551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1ca0 00:29:19.865 [2024-12-05 20:49:13.171472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.171489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.178195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016edf988 00:29:19.865 [2024-12-05 20:49:13.179442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.179459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.187429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efc560 00:29:19.865 [2024-12-05 20:49:13.188470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6953 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.188487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.195315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee9168 00:29:19.865 [2024-12-05 20:49:13.196218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.196237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.203041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee95a0 00:29:19.865 [2024-12-05 20:49:13.203822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.203838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.210762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eea680 00:29:19.865 [2024-12-05 20:49:13.211428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.211444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.219273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef3e60 00:29:19.865 [2024-12-05 20:49:13.219936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:43 nsid:1 lba:3392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.219953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.227909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee6738 00:29:19.865 [2024-12-05 20:49:13.228715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.228733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.236566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee1b48 00:29:19.865 [2024-12-05 20:49:13.237487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.237504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.244947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee6b70 00:29:19.865 [2024-12-05 20:49:13.245980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.245997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.254376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee6b70 00:29:19.865 [2024-12-05 20:49:13.255685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.255701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.261677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee9e10 00:29:19.865 [2024-12-05 20:49:13.262552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.262570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.269874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee9e10 00:29:19.865 [2024-12-05 20:49:13.270780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.270797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.278055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efac10 00:29:19.865 [2024-12-05 20:49:13.278923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.865 [2024-12-05 20:49:13.278940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:19.865 [2024-12-05 20:49:13.286503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5a90 00:29:19.865 
[2024-12-05 20:49:13.287375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.866 [2024-12-05 20:49:13.287392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:19.866 [2024-12-05 20:49:13.294628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5a90 00:29:19.866 [2024-12-05 20:49:13.295504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.866 [2024-12-05 20:49:13.295521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:19.866 [2024-12-05 20:49:13.303031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5a90 00:29:20.125 [2024-12-05 20:49:13.303914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.125 [2024-12-05 20:49:13.303931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:20.125 [2024-12-05 20:49:13.311349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef0350 00:29:20.125 [2024-12-05 20:49:13.312219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.125 [2024-12-05 20:49:13.312237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:20.125 [2024-12-05 20:49:13.319649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) 
with pdu=0x200016efc998 00:29:20.125 [2024-12-05 20:49:13.320496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.125 [2024-12-05 20:49:13.320516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:20.125 [2024-12-05 20:49:13.327335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee4140 00:29:20.125 [2024-12-05 20:49:13.328170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.125 [2024-12-05 20:49:13.328186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:20.125 [2024-12-05 20:49:13.337117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee4140 00:29:20.125 [2024-12-05 20:49:13.338427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.338443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.344305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef0ff8 00:29:20.126 [2024-12-05 20:49:13.345142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.345159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.351949] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eddc00 00:29:20.126 [2024-12-05 20:49:13.352775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.352791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.361705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eddc00 00:29:20.126 [2024-12-05 20:49:13.362961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.362984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.367503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef8618 00:29:20.126 [2024-12-05 20:49:13.367995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.368011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.377440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ede470 00:29:20.126 [2024-12-05 20:49:13.378370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.378388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 
20:49:13.385596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ede470 00:29:20.126 [2024-12-05 20:49:13.386511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.386528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.393733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef9f68 00:29:20.126 [2024-12-05 20:49:13.394649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.394667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.401306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee38d0 00:29:20.126 [2024-12-05 20:49:13.402554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.402571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.408348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efc128 00:29:20.126 [2024-12-05 20:49:13.409041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.409062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000b 
p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.417718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5ec8 00:29:20.126 [2024-12-05 20:49:13.418532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.418551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.426274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eef270 00:29:20.126 [2024-12-05 20:49:13.427177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.427195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.434676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efd208 00:29:20.126 [2024-12-05 20:49:13.435658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.435676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.442936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef4b08 00:29:20.126 [2024-12-05 20:49:13.443885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.443901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.451122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef31b8 00:29:20.126 [2024-12-05 20:49:13.452044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.452064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.459355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5220 00:29:20.126 [2024-12-05 20:49:13.460237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.460254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.467547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eebb98 00:29:20.126 [2024-12-05 20:49:13.468466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.468483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.475756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee8088 00:29:20.126 [2024-12-05 20:49:13.476676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.476692] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.483955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef96f8 00:29:20.126 [2024-12-05 20:49:13.484873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.484889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.492165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eff3c8 00:29:20.126 [2024-12-05 20:49:13.493061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.493077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.500403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef6cc8 00:29:20.126 [2024-12-05 20:49:13.501286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.501303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.508590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1ca0 00:29:20.126 [2024-12-05 20:49:13.509515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.509531] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.516757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee01f8 00:29:20.126 [2024-12-05 20:49:13.517678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.517694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.524976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeb328 00:29:20.126 [2024-12-05 20:49:13.525869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.525886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.533158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef4298 00:29:20.126 [2024-12-05 20:49:13.534111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.126 [2024-12-05 20:49:13.534128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.541428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeee38 00:29:20.126 [2024-12-05 20:49:13.542337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11633 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:20.126 [2024-12-05 20:49:13.542353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.126 [2024-12-05 20:49:13.549625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeaab8 00:29:20.127 [2024-12-05 20:49:13.550511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.127 [2024-12-05 20:49:13.550527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.127 [2024-12-05 20:49:13.557827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef6020 00:29:20.127 [2024-12-05 20:49:13.558744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.127 [2024-12-05 20:49:13.558760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.386 [2024-12-05 20:49:13.566276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee27f0 00:29:20.386 [2024-12-05 20:49:13.567258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.386 [2024-12-05 20:49:13.567275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.386 [2024-12-05 20:49:13.574619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016edf550 00:29:20.386 [2024-12-05 20:49:13.575519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 
nsid:1 lba:13394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.386 [2024-12-05 20:49:13.575536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.386 [2024-12-05 20:49:13.582799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef2510 00:29:20.386 [2024-12-05 20:49:13.583695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.583712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.590988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef7da8 00:29:20.387 [2024-12-05 20:49:13.591907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.591923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.599209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ede8a8 00:29:20.387 [2024-12-05 20:49:13.600120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.600137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.607452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eebfd0 00:29:20.387 [2024-12-05 20:49:13.608405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.608426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.615745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee7c50 00:29:20.387 [2024-12-05 20:49:13.616664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.616682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.623930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef9b30 00:29:20.387 [2024-12-05 20:49:13.624809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.624825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.387 30641.00 IOPS, 119.69 MiB/s [2024-12-05T19:49:13.828Z] [2024-12-05 20:49:13.632111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efeb58 00:29:20.387 [2024-12-05 20:49:13.633015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.633033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.640282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6098f0) with pdu=0x200016efeb58 00:29:20.387 [2024-12-05 20:49:13.641197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.641215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.648525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efeb58 00:29:20.387 [2024-12-05 20:49:13.649419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.649436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.656713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efeb58 00:29:20.387 [2024-12-05 20:49:13.657602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.657618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.664940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efeb58 00:29:20.387 [2024-12-05 20:49:13.665859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.665876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.673021] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1430 00:29:20.387 [2024-12-05 20:49:13.673838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.673855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.681387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eefae0 00:29:20.387 [2024-12-05 20:49:13.682187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.682205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.689119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeff18 00:29:20.387 [2024-12-05 20:49:13.689806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.689825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.697099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef0bc0 00:29:20.387 [2024-12-05 20:49:13.697740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.697756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:29:20.387 [2024-12-05 20:49:13.706262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef81e0 00:29:20.387 [2024-12-05 20:49:13.707084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.707102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.714486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef3e60 00:29:20.387 [2024-12-05 20:49:13.715271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.715288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.722709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef2d80 00:29:20.387 [2024-12-05 20:49:13.723471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.723488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.730900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee12d8 00:29:20.387 [2024-12-05 20:49:13.731666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.731684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.739158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeea00 00:29:20.387 [2024-12-05 20:49:13.739969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.739987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.747420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef5378 00:29:20.387 [2024-12-05 20:49:13.748223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.748240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.755631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef7100 00:29:20.387 [2024-12-05 20:49:13.756384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.756401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.764201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee2c28 00:29:20.387 [2024-12-05 20:49:13.765083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.765101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.772014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef2948 00:29:20.387 [2024-12-05 20:49:13.772900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.772917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:20.387 [2024-12-05 20:49:13.780632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef5378 00:29:20.387 [2024-12-05 20:49:13.781609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.387 [2024-12-05 20:49:13.781626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:20.388 [2024-12-05 20:49:13.789181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef7970 00:29:20.388 [2024-12-05 20:49:13.790311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.388 [2024-12-05 20:49:13.790329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:20.388 [2024-12-05 20:49:13.796488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ede8a8 00:29:20.388 [2024-12-05 20:49:13.796957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.388 [2024-12-05 20:49:13.796974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:20.388 [2024-12-05 20:49:13.804801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee9e10 00:29:20.388 [2024-12-05 20:49:13.805495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.388 [2024-12-05 20:49:13.805512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:20.388 [2024-12-05 20:49:13.813398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee8088 00:29:20.388 [2024-12-05 20:49:13.814258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.388 [2024-12-05 20:49:13.814275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:20.388 [2024-12-05 20:49:13.822067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee4578 00:29:20.388 [2024-12-05 20:49:13.823086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.388 [2024-12-05 20:49:13.823108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:20.647 [2024-12-05 20:49:13.830628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efb048 00:29:20.647 [2024-12-05 20:49:13.831682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.647 
[2024-12-05 20:49:13.831700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.647 [2024-12-05 20:49:13.838869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef6890 00:29:20.647 [2024-12-05 20:49:13.839864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.647 [2024-12-05 20:49:13.839882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.647 [2024-12-05 20:49:13.847026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee3060 00:29:20.647 [2024-12-05 20:49:13.848022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.647 [2024-12-05 20:49:13.848039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.647 [2024-12-05 20:49:13.855239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5220 00:29:20.647 [2024-12-05 20:49:13.856265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.856281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.863475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efef90 00:29:20.648 [2024-12-05 20:49:13.864501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7461 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.864518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.871662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef46d0 00:29:20.648 [2024-12-05 20:49:13.872745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.872762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.879933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef31b8 00:29:20.648 [2024-12-05 20:49:13.880959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.880975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.888142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1430 00:29:20.648 [2024-12-05 20:49:13.889126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.889143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.896401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eea680 00:29:20.648 [2024-12-05 20:49:13.897402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:12384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.897420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.904617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee73e0 00:29:20.648 [2024-12-05 20:49:13.905633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.905650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.912827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efc128 00:29:20.648 [2024-12-05 20:49:13.913821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.913837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.921008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee0ea0 00:29:20.648 [2024-12-05 20:49:13.922067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.922086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.930484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee23b8 00:29:20.648 [2024-12-05 20:49:13.931936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.931952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.936407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efcdd0 00:29:20.648 [2024-12-05 20:49:13.937084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.937101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.947007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef7da8 00:29:20.648 [2024-12-05 20:49:13.948384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.948401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.954207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee0630 00:29:20.648 [2024-12-05 20:49:13.955129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.955146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.962298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efe2e8 00:29:20.648 
[2024-12-05 20:49:13.963222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.963239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.970521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee3498 00:29:20.648 [2024-12-05 20:49:13.971422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.971439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.978197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef81e0 00:29:20.648 [2024-12-05 20:49:13.979083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.979100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.987266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee49b0 00:29:20.648 [2024-12-05 20:49:13.988064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.988081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:13.995439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6098f0) with pdu=0x200016ee49b0 00:29:20.648 [2024-12-05 20:49:13.996304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:13.996321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:14.003641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee49b0 00:29:20.648 [2024-12-05 20:49:14.004507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:14.004525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:14.012083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efef90 00:29:20.648 [2024-12-05 20:49:14.012736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:14.012754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:14.019825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ede038 00:29:20.648 [2024-12-05 20:49:14.020976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:14.020994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:14.027421] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee73e0 00:29:20.648 [2024-12-05 20:49:14.028088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:14.028105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:20.648 [2024-12-05 20:49:14.035565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee38d0 00:29:20.648 [2024-12-05 20:49:14.036251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.648 [2024-12-05 20:49:14.036270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:20.649 [2024-12-05 20:49:14.043821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee9168 00:29:20.649 [2024-12-05 20:49:14.044471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.649 [2024-12-05 20:49:14.044488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:20.649 [2024-12-05 20:49:14.052032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee9e10 00:29:20.649 [2024-12-05 20:49:14.052622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.649 [2024-12-05 20:49:14.052639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:29:20.649 [2024-12-05 20:49:14.059749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef5be8 00:29:20.649 [2024-12-05 20:49:14.060419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.649 [2024-12-05 20:49:14.060436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:20.649 [2024-12-05 20:49:14.068440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeaab8 00:29:20.649 [2024-12-05 20:49:14.069184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.649 [2024-12-05 20:49:14.069200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:20.649 [2024-12-05 20:49:14.076815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef57b0 00:29:20.649 [2024-12-05 20:49:14.077558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.649 [2024-12-05 20:49:14.077575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:20.908 [2024-12-05 20:49:14.087064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef0350 00:29:20.908 [2024-12-05 20:49:14.088305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.908 [2024-12-05 20:49:14.088322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:20.908 [2024-12-05 20:49:14.095569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5220 00:29:20.908 [2024-12-05 20:49:14.096522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.908 [2024-12-05 20:49:14.096540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:20.908 [2024-12-05 20:49:14.103746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5220 00:29:20.908 [2024-12-05 20:49:14.104755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.908 [2024-12-05 20:49:14.104772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:20.908 [2024-12-05 20:49:14.111940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5220 00:29:20.908 [2024-12-05 20:49:14.112991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.908 [2024-12-05 20:49:14.113009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:20.908 [2024-12-05 20:49:14.120206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5220 00:29:20.908 [2024-12-05 20:49:14.121244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.908 [2024-12-05 20:49:14.121261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:20.908 [2024-12-05 20:49:14.128414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5220 00:29:20.908 [2024-12-05 20:49:14.129328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.129346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.136669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efb480 00:29:20.909 [2024-12-05 20:49:14.137590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.137607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.145308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eee190 00:29:20.909 [2024-12-05 20:49:14.146404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.146421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.152408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eebfd0 00:29:20.909 [2024-12-05 20:49:14.152845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.152862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.160818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee3498 00:29:20.909 [2024-12-05 20:49:14.161624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.161641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.169102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee6b70 00:29:20.909 [2024-12-05 20:49:14.169880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.169897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.177359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efa3a0 00:29:20.909 [2024-12-05 20:49:14.178155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.178172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.185846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee1f80 00:29:20.909 [2024-12-05 20:49:14.186627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 
[2024-12-05 20:49:14.186644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.195211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1ca0 00:29:20.909 [2024-12-05 20:49:14.196351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.196369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.202427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeea00 00:29:20.909 [2024-12-05 20:49:14.203152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.203168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.210632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeea00 00:29:20.909 [2024-12-05 20:49:14.211302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.211318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.218261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eed920 00:29:20.909 [2024-12-05 20:49:14.219019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:239 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.219037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.226897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef4b08 00:29:20.909 [2024-12-05 20:49:14.227776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.227793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.235468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016edf550 00:29:20.909 [2024-12-05 20:49:14.235924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.235941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.244206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee8088 00:29:20.909 [2024-12-05 20:49:14.244755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.244773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.252841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee7818 00:29:20.909 [2024-12-05 20:49:14.253604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:82 nsid:1 lba:3586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.253624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.261393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef2510 00:29:20.909 [2024-12-05 20:49:14.262366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.262383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.269666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efeb58 00:29:20.909 [2024-12-05 20:49:14.270650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.270666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.279006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee4de8 00:29:20.909 [2024-12-05 20:49:14.280328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.280344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.287394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef5378 00:29:20.909 [2024-12-05 20:49:14.288774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.288792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.293227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeaef0 00:29:20.909 [2024-12-05 20:49:14.293849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.293866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.302895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee12d8 00:29:20.909 [2024-12-05 20:49:14.303889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.303907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.311050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef6890 00:29:20.909 [2024-12-05 20:49:14.312039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.312056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.319288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef7538 00:29:20.909 
[2024-12-05 20:49:14.320282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.320299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.327522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efac10 00:29:20.909 [2024-12-05 20:49:14.328522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.328539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.335768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee23b8 00:29:20.909 [2024-12-05 20:49:14.336739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.336757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:20.909 [2024-12-05 20:49:14.343562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee3d08 00:29:20.909 [2024-12-05 20:49:14.344562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.909 [2024-12-05 20:49:14.344580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:21.169 [2024-12-05 20:49:14.352617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) 
with pdu=0x200016edf118 00:29:21.169 [2024-12-05 20:49:14.353614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-12-05 20:49:14.353642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:21.169 [2024-12-05 20:49:14.360821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eebfd0 00:29:21.169 [2024-12-05 20:49:14.361866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-12-05 20:49:14.361882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:21.169 [2024-12-05 20:49:14.368445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeb328 00:29:21.169 [2024-12-05 20:49:14.369093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-12-05 20:49:14.369110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:21.169 [2024-12-05 20:49:14.376551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee1710 00:29:21.169 [2024-12-05 20:49:14.377303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-12-05 20:49:14.377320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:21.169 [2024-12-05 20:49:14.384810] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efa7d8 00:29:21.169 [2024-12-05 20:49:14.385554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-12-05 20:49:14.385571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:21.169 [2024-12-05 20:49:14.392999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee2c28 00:29:21.169 [2024-12-05 20:49:14.393760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-12-05 20:49:14.393776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:21.169 [2024-12-05 20:49:14.401192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef4f40 00:29:21.169 [2024-12-05 20:49:14.401834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-12-05 20:49:14.401851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:21.169 [2024-12-05 20:49:14.409363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1ca0 00:29:21.169 [2024-12-05 20:49:14.410033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-12-05 20:49:14.410051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:21.169 [2024-12-05 
20:49:14.417629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1ca0 00:29:21.169 [2024-12-05 20:49:14.418413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-12-05 20:49:14.418432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:21.169 [2024-12-05 20:49:14.425837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1ca0 00:29:21.169 [2024-12-05 20:49:14.426528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-12-05 20:49:14.426547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.434099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef1ca0 00:29:21.170 [2024-12-05 20:49:14.434780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.434799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.441914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee7c50 00:29:21.170 [2024-12-05 20:49:14.442691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.442708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0013 
p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.451178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efa7d8 00:29:21.170 [2024-12-05 20:49:14.452079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.452097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.459478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efc998 00:29:21.170 [2024-12-05 20:49:14.460271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.460288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.468069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee7818 00:29:21.170 [2024-12-05 20:49:14.469009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.469026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.476703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef6020 00:29:21.170 [2024-12-05 20:49:14.477674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.477691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.484496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef6458 00:29:21.170 [2024-12-05 20:49:14.485366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.485384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.492668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee5658 00:29:21.170 [2024-12-05 20:49:14.493748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.493765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.501315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016efb048 00:29:21.170 [2024-12-05 20:49:14.502473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.502490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.509626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eed920 00:29:21.170 [2024-12-05 20:49:14.510720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.510737] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.516537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee95a0 00:29:21.170 [2024-12-05 20:49:14.516961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.516978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.525791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eeb328 00:29:21.170 [2024-12-05 20:49:14.526852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.526870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.533467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eed920 00:29:21.170 [2024-12-05 20:49:14.534407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.534423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.541869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eed920 00:29:21.170 [2024-12-05 20:49:14.542863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.542885] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.550441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee0630 00:29:21.170 [2024-12-05 20:49:14.551293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.551310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.559759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee4578 00:29:21.170 [2024-12-05 20:49:14.561138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.561154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.565633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef31b8 00:29:21.170 [2024-12-05 20:49:14.566263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.566280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.573983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016eec840 00:29:21.170 [2024-12-05 20:49:14.574519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:21.170 [2024-12-05 20:49:14.574536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.582257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef0350 00:29:21.170 [2024-12-05 20:49:14.582789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.582805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.591509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef7100 00:29:21.170 [2024-12-05 20:49:14.592531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.592548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.170 [2024-12-05 20:49:14.599922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ee6fa8 00:29:21.170 [2024-12-05 20:49:14.600971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-12-05 20:49:14.600987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:21.429 [2024-12-05 20:49:14.608619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef81e0 00:29:21.429 [2024-12-05 20:49:14.609598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2016 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.429 [2024-12-05 20:49:14.609615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:21.429 [2024-12-05 20:49:14.616953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016ef8618 00:29:21.429 [2024-12-05 20:49:14.618145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.429 [2024-12-05 20:49:14.618161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:21.429 [2024-12-05 20:49:14.625327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6098f0) with pdu=0x200016edf988 00:29:21.429 [2024-12-05 20:49:14.626495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.429 [2024-12-05 20:49:14.626512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.429 30775.00 IOPS, 120.21 MiB/s 00:29:21.429 Latency(us) 00:29:21.429 [2024-12-05T19:49:14.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.429 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.429 nvme0n1 : 2.00 30798.59 120.31 0.00 0.00 4151.56 1951.19 10902.81 00:29:21.430 [2024-12-05T19:49:14.871Z] =================================================================================================================== 00:29:21.430 [2024-12-05T19:49:14.871Z] Total : 30798.59 120.31 0.00 0.00 4151.56 1951.19 10902.81 00:29:21.430 { 00:29:21.430 "results": [ 00:29:21.430 { 00:29:21.430 "job": "nvme0n1", 00:29:21.430 "core_mask": "0x2", 00:29:21.430 "workload": 
"randwrite", 00:29:21.430 "status": "finished", 00:29:21.430 "queue_depth": 128, 00:29:21.430 "io_size": 4096, 00:29:21.430 "runtime": 2.002624, 00:29:21.430 "iops": 30798.592246971974, 00:29:21.430 "mibps": 120.30700096473427, 00:29:21.430 "io_failed": 0, 00:29:21.430 "io_timeout": 0, 00:29:21.430 "avg_latency_us": 4151.563309976447, 00:29:21.430 "min_latency_us": 1951.1854545454546, 00:29:21.430 "max_latency_us": 10902.807272727272 00:29:21.430 } 00:29:21.430 ], 00:29:21.430 "core_count": 1 00:29:21.430 } 00:29:21.430 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:21.430 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:21.430 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:21.430 | .driver_specific 00:29:21.430 | .nvme_error 00:29:21.430 | .status_code 00:29:21.430 | .command_transient_transport_error' 00:29:21.430 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:21.430 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 241 > 0 )) 00:29:21.430 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 524099 00:29:21.430 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 524099 ']' 00:29:21.430 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 524099 00:29:21.430 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:21.430 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.430 20:49:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 524099 00:29:21.688 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.688 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.688 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 524099' 00:29:21.688 killing process with pid 524099 00:29:21.688 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 524099 00:29:21.688 Received shutdown signal, test time was about 2.000000 seconds 00:29:21.688 00:29:21.688 Latency(us) 00:29:21.688 [2024-12-05T19:49:15.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.688 [2024-12-05T19:49:15.129Z] =================================================================================================================== 00:29:21.688 [2024-12-05T19:49:15.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.688 20:49:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 524099 00:29:21.688 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:21.688 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:21.688 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:21.688 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:21.688 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:21.688 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=524643 00:29:21.689 20:49:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 524643 /var/tmp/bperf.sock 00:29:21.689 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:21.689 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 524643 ']' 00:29:21.689 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:21.689 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.689 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:21.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:21.689 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.689 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:21.689 [2024-12-05 20:49:15.085790] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:29:21.689 [2024-12-05 20:49:15.085840] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524643 ] 00:29:21.689 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:21.689 Zero copy mechanism will not be used. 
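[Editor's note, not part of the captured log: the `get_transient_errcount` helper traced above pipes `bdev_get_iostat` output through `jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'` and then asserts the count is positive (`(( 241 > 0 ))` in this run). The Python sketch below shows the same extraction against a hypothetical `bdev_get_iostat`-style payload; the field names follow the jq path seen in the trace, and the sample value 241 simply mirrors the count observed in this run.]

```python
# Minimal sketch of the transient-error-count extraction done by the
# digest.sh helper via jq. The payload below is hypothetical; real output
# comes from: rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
import json

sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 241
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(iostat: dict) -> int:
    # Equivalent of the jq filter:
    #   .bdevs[0].driver_specific.nvme_error.status_code
    #     .command_transient_transport_error
    return iostat["bdevs"][0]["driver_specific"]["nvme_error"] \
                 ["status_code"]["command_transient_transport_error"]

count = get_transient_errcount(sample)
print(count)
# The test then passes when this count is > 0, i.e. every injected
# CRC32C data-digest corruption surfaced as a transient transport error.
```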
00:29:21.946 [2024-12-05 20:49:15.159671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.946 [2024-12-05 20:49:15.198318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.946 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.946 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:21.946 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:21.946 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:22.202 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:22.202 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.202 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:22.202 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.203 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.203 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.460 nvme0n1 00:29:22.460 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:22.460 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.460 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:22.719 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.719 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:22.719 20:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.719 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:22.719 Zero copy mechanism will not be used. 00:29:22.719 Running I/O for 2 seconds... 00:29:22.719 [2024-12-05 20:49:15.992320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:15.992420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:15.992446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:15.997131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:15.997244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:15.997265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 
20:49:16.002779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.003101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.003121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.009517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.009703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.009721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.016552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.016899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.016919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.023143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.023374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.023393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.028360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.028613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.028632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.033168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.033425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.033443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.037507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.037732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.037751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.041709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.041979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.041997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.045789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.046028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.046045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.049653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.049894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.049912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.053914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.054165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.054183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.058174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.058430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.058447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.062465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.062726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.062747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.719 [2024-12-05 20:49:16.066860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.719 [2024-12-05 20:49:16.067100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.719 [2024-12-05 20:49:16.067118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.070725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.070965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.070982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.075099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.075354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 
[2024-12-05 20:49:16.075372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.079339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.079594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.079612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.083363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.083624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.083642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.087363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.087612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.087630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.091355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.091602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.091621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.095435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.095681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.095699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.099997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.100245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.100263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.103796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.104038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.104056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.107663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.107898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.107916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.111511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.111754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.111772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.115353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.115595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.115613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.119312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.119554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.119572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.123187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.123427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.123445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.127067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.127313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.127331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.130971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.131206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.131224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.134875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.135112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.135130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.138723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 
00:29:22.720 [2024-12-05 20:49:16.138954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.138972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.142604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.142849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.142866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.146417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.146675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.146693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.150279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.150521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.150539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.154178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.720 [2024-12-05 20:49:16.154430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.720 [2024-12-05 20:49:16.154448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.720 [2024-12-05 20:49:16.158035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.158284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.980 [2024-12-05 20:49:16.158303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.980 [2024-12-05 20:49:16.161980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.162226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.980 [2024-12-05 20:49:16.162244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.980 [2024-12-05 20:49:16.165885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.166140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.980 [2024-12-05 20:49:16.166161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.980 [2024-12-05 20:49:16.169784] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.170042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.980 [2024-12-05 20:49:16.170066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.980 [2024-12-05 20:49:16.173921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.174196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.980 [2024-12-05 20:49:16.174214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.980 [2024-12-05 20:49:16.179382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.179667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.980 [2024-12-05 20:49:16.179685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.980 [2024-12-05 20:49:16.185293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.185584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.980 [2024-12-05 20:49:16.185602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:22.980 [2024-12-05 20:49:16.190291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.190528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.980 [2024-12-05 20:49:16.190547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.980 [2024-12-05 20:49:16.195226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.195458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.980 [2024-12-05 20:49:16.195476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.980 [2024-12-05 20:49:16.200163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.200409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.980 [2024-12-05 20:49:16.200427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.980 [2024-12-05 20:49:16.204905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.205219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.980 [2024-12-05 20:49:16.205237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.980 [2024-12-05 20:49:16.209995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.980 [2024-12-05 20:49:16.210270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.210287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.214894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.215163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.215182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.219775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.220090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.220109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.224589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.224854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.224872] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.229286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.229519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.229537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.234064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.234316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.234333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.239212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.239493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.239512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.244316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.244564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.244582] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.249064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.249307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.249325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.254150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.254393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.254411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.258922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.259198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.259216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.263717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.263936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:22.981 [2024-12-05 20:49:16.263954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.267540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.267774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.267792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.271490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.271724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.271742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.275388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.275612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.275630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.279158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.279368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.279386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.981 [2024-12-05 20:49:16.283093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.981 [2024-12-05 20:49:16.283300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.981 [2024-12-05 20:49:16.283317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.286733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.286937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.286958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.290453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.290659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.290677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.294183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.294424] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.294442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.297991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.298220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.298238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.301717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.301929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.301951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.305498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.305700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.305718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.309197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.309435] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.309452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.312927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.313158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.313175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.316661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.316876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.316894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.320417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.320624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.320646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.324066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with 
pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.324265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.324287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.327677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.327881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.327898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.331336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.331548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.331565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.335034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.335260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.335278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.338785] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.338992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.339010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.342530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.342748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.342766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.346299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.346537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.346555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.350101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.350323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.350341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.353816] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.982 [2024-12-05 20:49:16.354025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.982 [2024-12-05 20:49:16.354042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.982 [2024-12-05 20:49:16.357525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.983 [2024-12-05 20:49:16.357723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.983 [2024-12-05 20:49:16.357740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.983 [2024-12-05 20:49:16.361220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.983 [2024-12-05 20:49:16.361433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.983 [2024-12-05 20:49:16.361450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.983 [2024-12-05 20:49:16.364900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.983 [2024-12-05 20:49:16.365116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.983 [2024-12-05 20:49:16.365133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:29:22.983 [2024-12-05 20:49:16.368531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.983 [2024-12-05 20:49:16.368745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.983 [2024-12-05 20:49:16.368762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.983 [2024-12-05 20:49:16.372213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.983 [2024-12-05 20:49:16.372427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.983 [2024-12-05 20:49:16.372445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.983 [2024-12-05 20:49:16.375887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.983 [2024-12-05 20:49:16.376107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.983 [2024-12-05 20:49:16.376125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.983 [2024-12-05 20:49:16.379543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.983 [2024-12-05 20:49:16.379744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.983 [2024-12-05 20:49:16.379760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:22.983 [2024-12-05 20:49:16.383163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.983 [2024-12-05 20:49:16.383361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.983 [2024-12-05 20:49:16.383381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:22.983 [2024-12-05 20:49:16.386949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.983 [2024-12-05 20:49:16.387115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.983 [2024-12-05 20:49:16.387131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:22.983 [2024-12-05 20:49:16.390885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.983 [2024-12-05 20:49:16.391094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.983 [2024-12-05 20:49:16.391111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:22.983 [2024-12-05 20:49:16.395095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:22.983 [2024-12-05 20:49:16.395280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.983 [2024-12-05 20:49:16.395296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:22.983 [2024-12-05 20:49:16.399197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:22.983 [2024-12-05 20:49:16.399388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.983 [2024-12-05 20:49:16.399405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:22.983 [2024-12-05 20:49:16.403466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:22.983 [2024-12-05 20:49:16.403674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.983 [2024-12-05 20:49:16.403691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:22.983 [2024-12-05 20:49:16.407814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:22.983 [2024-12-05 20:49:16.408001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.983 [2024-12-05 20:49:16.408017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... ~60 further identical data_crc32_calc_done *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR completion triplets elided; timestamps 20:49:16.411 through 20:49:16.721, cid cycling 0-2, sqhd cycling 0002/0022/0042/0062, lba varying per write ...]
00:29:23.505 [2024-12-05 20:49:16.725264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:23.505 [2024-12-05 20:49:16.725430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.505 [2024-12-05 20:49:16.725448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:29:23.505 [2024-12-05 20:49:16.728873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.729035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.729053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.732465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.732626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.732644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.736035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.736201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.736219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.739587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.739750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.739768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.743361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.743518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.743536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.747405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.747573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.747590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.751788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.751942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.751960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.755845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.756001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.756020] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.760475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.760635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.760653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.764828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.764990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.765011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.769798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.769971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.769988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.774143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.774306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.774323] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.777877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.505 [2024-12-05 20:49:16.778044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.505 [2024-12-05 20:49:16.778067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.505 [2024-12-05 20:49:16.781574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.781733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.781750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.785150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.785309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.785326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.788723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.788882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:23.506 [2024-12-05 20:49:16.788899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.792285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.792444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.792461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.795792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.795954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.795971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.799390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.799558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.799575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.802920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.803103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.803121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.806573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.806747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.806764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.810411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.810571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.810589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.814875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.815031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.815049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.819096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.819254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.819272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.822808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.822965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.822983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.826563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.826722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.826739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.830460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.830611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.830628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.834330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 
[2024-12-05 20:49:16.834488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.834506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.838064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.838237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.838255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.841907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.842075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.842092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.845668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.845842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.845860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.849534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.849711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.849729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.853820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.853988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.854005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.858105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.858282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.858300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.861966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.862141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.862159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.866352] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.866499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.866520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.871328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.871493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.871510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.875153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.875307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.875324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.506 [2024-12-05 20:49:16.879122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.506 [2024-12-05 20:49:16.879286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.506 [2024-12-05 20:49:16.879303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:29:23.507 [2024-12-05 20:49:16.882847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.883018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.883036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.886653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.886827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.886844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.890545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.890714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.890732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.894232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.894403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.894421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.898100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.898294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.898312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.901978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.902156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.902174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.905856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.906007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.906024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.909736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.909888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.909905] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.913517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.913677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.913694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.917384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.917543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.917561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.921230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.921388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.921405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.925087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.925242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.925259] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.928939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.929099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.929116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.932841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.932993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.933010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.936706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.936866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.507 [2024-12-05 20:49:16.936883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.507 [2024-12-05 20:49:16.940773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.507 [2024-12-05 20:49:16.940927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:23.507 [2024-12-05 20:49:16.940944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.767 [2024-12-05 20:49:16.944658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.767 [2024-12-05 20:49:16.944817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.767 [2024-12-05 20:49:16.944835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.767 [2024-12-05 20:49:16.948588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.767 [2024-12-05 20:49:16.948751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.767 [2024-12-05 20:49:16.948768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.767 [2024-12-05 20:49:16.952500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.767 [2024-12-05 20:49:16.952655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.767 [2024-12-05 20:49:16.952672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.767 [2024-12-05 20:49:16.956403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.767 [2024-12-05 20:49:16.956569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.767 [2024-12-05 20:49:16.956585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.767 [2024-12-05 20:49:16.960298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.767 [2024-12-05 20:49:16.960453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.767 [2024-12-05 20:49:16.960470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.767 [2024-12-05 20:49:16.964158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.767 [2024-12-05 20:49:16.964317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.767 [2024-12-05 20:49:16.964335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.767 [2024-12-05 20:49:16.967975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.767 [2024-12-05 20:49:16.968153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.767 [2024-12-05 20:49:16.968176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.767 [2024-12-05 20:49:16.971877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.767 [2024-12-05 20:49:16.972047] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.767 [2024-12-05 20:49:16.972071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.767 [2024-12-05 20:49:16.975961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.767 [2024-12-05 20:49:16.976152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.767 [2024-12-05 20:49:16.976170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.767 [2024-12-05 20:49:16.980845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.767 [2024-12-05 20:49:16.981138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.767 [2024-12-05 20:49:16.981156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.767 [2024-12-05 20:49:16.986225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.767 [2024-12-05 20:49:16.986465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.767 [2024-12-05 20:49:16.986483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.767 7292.00 IOPS, 911.50 MiB/s [2024-12-05T19:49:17.208Z] [2024-12-05 20:49:16.992309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with 
pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:16.992590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:16.992609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:16.997896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:16.998092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:16.998110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.002085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.002252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.002271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.006241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.006444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.006464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.010324] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.010527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.010546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.014473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.014652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.014670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.018590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.018790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.018815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.022795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.022996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.023020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.026914] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.027080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.027098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.030852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.031033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.031051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.034781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.034974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.034992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.038688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.038863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.038880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:29:23.768 [2024-12-05 20:49:17.042486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.042653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.042671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.046256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.046433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.046450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.050116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.050321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.050346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.053716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.053916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.053934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.057536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.057759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.057778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.062512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.062689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.062707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.067132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.067344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.067363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.071116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.071294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.071311] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.075007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.075203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.075220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.079233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.079415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.079435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.084801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.084944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.084961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.088976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.089160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.089178] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.092850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.093018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.093037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.768 [2024-12-05 20:49:17.096606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.768 [2024-12-05 20:49:17.096776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.768 [2024-12-05 20:49:17.096794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.100284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.100459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.100478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.103956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.104145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:23.769 [2024-12-05 20:49:17.104162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.107861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.108030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.108047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.112240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.112430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.112448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.116381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.116569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.116587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.120171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.120351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.120368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.124055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.124245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.124262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.127925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.128116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.128133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.131556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.131727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.131744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.135295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.135463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.135480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.139632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.139796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.139814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.144291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.144456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.144474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.148333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.148504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.148522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.152134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 
[2024-12-05 20:49:17.152303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.152321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.156009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.156185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.156202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.159750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.159918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.159935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.163535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.163723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.163741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.167239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.167406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.167423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.171317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.171495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.171512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.175819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.176007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.176025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.180005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.180186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.180204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.183894] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.184078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.184099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.187729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.187895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.187913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.191294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.191466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.191484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:23.769 [2024-12-05 20:49:17.194852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.195025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.769 [2024-12-05 20:49:17.195043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:29:23.769 [2024-12-05 20:49:17.198313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.769 [2024-12-05 20:49:17.198486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.770 [2024-12-05 20:49:17.198503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.770 [2024-12-05 20:49:17.201870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.770 [2024-12-05 20:49:17.202041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.770 [2024-12-05 20:49:17.202065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:23.770 [2024-12-05 20:49:17.205470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:23.770 [2024-12-05 20:49:17.205641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.770 [2024-12-05 20:49:17.205658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.031 [2024-12-05 20:49:17.209227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.031 [2024-12-05 20:49:17.209418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.031 [2024-12-05 20:49:17.209436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.031 [2024-12-05 20:49:17.213958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.031 [2024-12-05 20:49:17.214152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.031 [2024-12-05 20:49:17.214170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.031 [2024-12-05 20:49:17.218235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.031 [2024-12-05 20:49:17.218408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.031 [2024-12-05 20:49:17.218426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.031 [2024-12-05 20:49:17.222125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.031 [2024-12-05 20:49:17.222291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.031 [2024-12-05 20:49:17.222309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.031 [2024-12-05 20:49:17.225873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.031 [2024-12-05 20:49:17.226043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.031 [2024-12-05 20:49:17.226067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.229612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.229779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.229796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.233300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.233472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.233490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.237048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.237226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.237243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.240922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.241090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.241107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.244977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.245155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.245173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.248886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.249076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.249093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.252754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.252924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.252941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.256381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.256552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.256570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.260310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.260485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.260503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.264535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.264701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.264718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.268938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.269116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.269134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.272823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.272979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.272996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.276541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.276696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.276713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.280246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.280399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.280416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.284168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.284342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.284364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.288296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.288452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.031 [2024-12-05 20:49:17.288470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.031 [2024-12-05 20:49:17.292246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.031 [2024-12-05 20:49:17.292427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.292445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.295904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.296079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.296097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.299645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.299815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.299832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.303415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.303575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.303592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.307406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.307561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.307578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.311208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.311363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.311380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.314830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.314988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.315005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.318484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.318642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.318659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.322159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.322319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.322336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.325799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.325969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.325987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.329492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.329652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.329670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.332939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.333103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.333121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.336390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.336549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.336567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.339798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.339952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.339970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.343314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.343468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.343486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.347049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.347213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.347231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.351561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.351720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.351738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.355524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.355684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.355701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.359470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.359623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.359641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.363252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.363406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.363424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.366927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.367093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.367111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.370705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.370863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.370881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.374463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.374624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.374641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.378167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.378328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.378345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.382114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.382274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.032 [2024-12-05 20:49:17.382294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.032 [2024-12-05 20:49:17.385870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.032 [2024-12-05 20:49:17.386032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.386049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.389621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.389781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.389799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.393366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.393530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.393548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.397135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.397296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.397313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.401422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.401651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.401670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.405540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.405713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.405730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.409139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.409294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.409311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.412666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.412824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.412841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.416205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.416366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.416386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.419770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.419927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.419945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.423335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.423488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.423505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.427400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.427555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.427573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.432299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.432457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.432474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.436142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.436330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.436347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.440027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.440214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.440232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.443734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.443896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.443913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.447244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.447418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.447435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.450740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.450916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.450934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.454299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.454453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.454471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.457806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.457964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.457982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.461316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.461469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.461486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.464812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.464976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.464992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.033 [2024-12-05 20:49:17.468336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.033 [2024-12-05 20:49:17.468518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.033 [2024-12-05 20:49:17.468536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.471843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.472005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.472022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.475355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.475516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.475533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.478813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.478975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.478992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.482296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.482476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.482494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.485834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.485995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.486012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.489324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.489485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.489502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.492821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.492996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.493014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.496313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.496487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.496505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.499829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.500000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.500018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.503380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.503532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.503550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.507424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.507582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.507599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.511800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.511956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.511976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.515600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.515756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.515774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:24.294 [2024-12-05 20:49:17.519437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8
00:29:24.294 [2024-12-05 20:49:17.519592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-12-05 20:49:17.519609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.294 [2024-12-05 20:49:17.523117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.294 [2024-12-05 20:49:17.523278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.294 [2024-12-05 20:49:17.523295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.294 [2024-12-05 20:49:17.526948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.527108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.527126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.531217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.531375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.531393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.535766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.535935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.535953] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.539626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.539786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.539803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.543478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.543643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.543661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.547172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.547353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.547371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.551239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.551400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.551417] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.555201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.555369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.555387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.559527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.559697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.559715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.563982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.564161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.564179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.568759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.568917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:24.295 [2024-12-05 20:49:17.568934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.572567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.572717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.572734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.576264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.576442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.576459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.580068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.580226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.580243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.583880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.584040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.584063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.587724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.587880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.587897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.591490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.591639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.591657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.595312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.595461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.595479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.599099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.599272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.599300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.602732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.602906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.602923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.606626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.606798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.606816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.610930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.611103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.611120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.615261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 
[2024-12-05 20:49:17.615441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.615461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.619319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.619480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.619498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.623691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.623844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.623861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.295 [2024-12-05 20:49:17.628112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.295 [2024-12-05 20:49:17.628271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.295 [2024-12-05 20:49:17.628288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.632525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.632700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.632717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.637011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.637186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.637204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.641523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.641663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.641680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.645781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.645931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.645948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.649491] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.649642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.649659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.653123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.653298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.653326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.656857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.657023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.657040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.660507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.660682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.660700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:29:24.296 [2024-12-05 20:49:17.664338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.664511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.664528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.668072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.668242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.668259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.671806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.671965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.671982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.675464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.675603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.675620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.679243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.679386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.679403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.683589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.683730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.683746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.687661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.687814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.687830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.691784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.691934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.691951] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.696509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.696683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.696701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.700695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.700835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.700852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.704473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.704628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.704645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.708065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.708243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.708260] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.711800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.711952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.711969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.715584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.715728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.715745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.719404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.719564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.719587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.723103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.723247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:24.296 [2024-12-05 20:49:17.723265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.726834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.296 [2024-12-05 20:49:17.726994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.296 [2024-12-05 20:49:17.727011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.296 [2024-12-05 20:49:17.730558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.297 [2024-12-05 20:49:17.730708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.297 [2024-12-05 20:49:17.730725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.557 [2024-12-05 20:49:17.734350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.557 [2024-12-05 20:49:17.734512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.734530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.737882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.738053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.738076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.741394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.741536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.741554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.744968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.745125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.745143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.748618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.748772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.748789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.752449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.752628] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.752645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.757138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.757287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.757304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.761327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.761468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.761486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.765118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.765263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.765280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.768978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.769127] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.769144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.772794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.772947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.772964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.776506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.776655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.776673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.780310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.780452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.780469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.784050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 
00:29:24.558 [2024-12-05 20:49:17.784221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.784238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.787722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.787896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.787913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.791593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.791740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.791758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.795930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.796080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.796097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.800191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.800357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.800374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.804017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.804188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.804205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.808150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.808320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.808338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.812014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.812197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.812215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.815756] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.815904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.815921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.819498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.819634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.819655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.823284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.823428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.558 [2024-12-05 20:49:17.823445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.558 [2024-12-05 20:49:17.826896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.558 [2024-12-05 20:49:17.827050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.827073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:24.559 [2024-12-05 20:49:17.830768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.830927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.830944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.835278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.835435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.835452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.839643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.839800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.839817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.843430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.843588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.843605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.847192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.847329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.847346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.851047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.851229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.851246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.854798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.854951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.854968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.858464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.858620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.858638] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.862077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.862225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.862242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.865685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.865840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.865857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.869434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.869576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.869593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.873069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.873225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.873242] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.877041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.877218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.877236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.880815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.880967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.880984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.884529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.884687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.884704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.888239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.888410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:24.559 [2024-12-05 20:49:17.888427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.892018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.892178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.892196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.895581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.895757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.895774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.899368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.559 [2024-12-05 20:49:17.899530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.559 [2024-12-05 20:49:17.899547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.559 [2024-12-05 20:49:17.903153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.903376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.903394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.908163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.908318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.908335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.912265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.912424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.912441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.916387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.916540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.916557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.920576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.920730] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.920750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.925339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.925495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.925512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.929880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.930039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.930055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.933731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.933909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.933926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.937530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.937704] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.937721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.941265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.941432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.941449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.945149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.945313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.945330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.948996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.949168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.949186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.952733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 
00:29:24.560 [2024-12-05 20:49:17.952877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.952894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.956512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.956654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.956675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.960626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.960794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.960811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.965118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.965288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.965305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.969140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.969281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.969298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.972931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.973086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.973103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.976760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.976903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.976920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.980345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.980504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.980520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.983885] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.984029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.984046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.560 [2024-12-05 20:49:17.987386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.987526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.987543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.560 7621.50 IOPS, 952.69 MiB/s [2024-12-05T19:49:18.001Z] [2024-12-05 20:49:17.991690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x609dd0) with pdu=0x200016eff3c8 00:29:24.560 [2024-12-05 20:49:17.991827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.560 [2024-12-05 20:49:17.991845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.560 00:29:24.560 Latency(us) 00:29:24.560 [2024-12-05T19:49:18.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.560 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:24.560 nvme0n1 : 2.00 7620.07 952.51 0.00 0.00 2096.15 1385.19 9711.24 00:29:24.560 [2024-12-05T19:49:18.001Z] 
=================================================================================================================== 00:29:24.560 [2024-12-05T19:49:18.001Z] Total : 7620.07 952.51 0.00 0.00 2096.15 1385.19 9711.24 00:29:24.820 { 00:29:24.820 "results": [ 00:29:24.820 { 00:29:24.820 "job": "nvme0n1", 00:29:24.820 "core_mask": "0x2", 00:29:24.820 "workload": "randwrite", 00:29:24.820 "status": "finished", 00:29:24.820 "queue_depth": 16, 00:29:24.820 "io_size": 131072, 00:29:24.820 "runtime": 2.002999, 00:29:24.820 "iops": 7620.073699487618, 00:29:24.820 "mibps": 952.5092124359522, 00:29:24.820 "io_failed": 0, 00:29:24.820 "io_timeout": 0, 00:29:24.820 "avg_latency_us": 2096.1496457863045, 00:29:24.820 "min_latency_us": 1385.1927272727273, 00:29:24.820 "max_latency_us": 9711.243636363637 00:29:24.820 } 00:29:24.820 ], 00:29:24.820 "core_count": 1 00:29:24.820 } 00:29:24.820 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:24.820 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:24.820 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:24.820 | .driver_specific 00:29:24.820 | .nvme_error 00:29:24.820 | .status_code 00:29:24.820 | .command_transient_transport_error' 00:29:24.820 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:24.820 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 493 > 0 )) 00:29:24.820 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 524643 00:29:24.820 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 524643 ']' 00:29:24.820 20:49:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 524643 00:29:24.820 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:24.820 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.820 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 524643 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 524643' 00:29:25.079 killing process with pid 524643 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 524643 00:29:25.079 Received shutdown signal, test time was about 2.000000 seconds 00:29:25.079 00:29:25.079 Latency(us) 00:29:25.079 [2024-12-05T19:49:18.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.079 [2024-12-05T19:49:18.520Z] =================================================================================================================== 00:29:25.079 [2024-12-05T19:49:18.520Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 524643 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 522749 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 522749 ']' 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # kill -0 522749 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 522749 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 522749' 00:29:25.079 killing process with pid 522749 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 522749 00:29:25.079 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 522749 00:29:25.337 00:29:25.337 real 0m14.446s 00:29:25.337 user 0m27.401s 00:29:25.337 sys 0m4.672s 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.337 ************************************ 00:29:25.337 END TEST nvmf_digest_error 00:29:25.337 ************************************ 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 
00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.337 rmmod nvme_tcp 00:29:25.337 rmmod nvme_fabrics 00:29:25.337 rmmod nvme_keyring 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:25.337 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 522749 ']' 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 522749 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 522749 ']' 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 522749 00:29:25.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (522749) - No such process 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 522749 is not found' 00:29:25.338 Process with pid 522749 is not found 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:25.338 20:49:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.338 20:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:27.868 00:29:27.868 real 0m37.073s 00:29:27.868 user 0m55.735s 00:29:27.868 sys 0m13.826s 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:27.868 ************************************ 00:29:27.868 END TEST nvmf_digest 00:29:27.868 ************************************ 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:27.868 ************************************ 00:29:27.868 START TEST nvmf_bdevperf 00:29:27.868 ************************************ 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:27.868 * Looking for test storage... 00:29:27.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:27.868 20:49:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.868 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:29:27.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.869 --rc genhtml_branch_coverage=1 00:29:27.869 --rc genhtml_function_coverage=1 00:29:27.869 --rc genhtml_legend=1 00:29:27.869 --rc geninfo_all_blocks=1 00:29:27.869 --rc geninfo_unexecuted_blocks=1 00:29:27.869 00:29:27.869 ' 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:27.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.869 --rc genhtml_branch_coverage=1 00:29:27.869 --rc genhtml_function_coverage=1 00:29:27.869 --rc genhtml_legend=1 00:29:27.869 --rc geninfo_all_blocks=1 00:29:27.869 --rc geninfo_unexecuted_blocks=1 00:29:27.869 00:29:27.869 ' 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:27.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.869 --rc genhtml_branch_coverage=1 00:29:27.869 --rc genhtml_function_coverage=1 00:29:27.869 --rc genhtml_legend=1 00:29:27.869 --rc geninfo_all_blocks=1 00:29:27.869 --rc geninfo_unexecuted_blocks=1 00:29:27.869 00:29:27.869 ' 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:27.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.869 --rc genhtml_branch_coverage=1 00:29:27.869 --rc genhtml_function_coverage=1 00:29:27.869 --rc genhtml_legend=1 00:29:27.869 --rc geninfo_all_blocks=1 00:29:27.869 --rc geninfo_unexecuted_blocks=1 00:29:27.869 00:29:27.869 ' 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:27.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:29:27.869 20:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:34.442 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:34.442 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:34.442 Found net devices under 0000:af:00.0: cvl_0_0 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.442 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:34.442 Found net devices under 0000:af:00.1: cvl_0_1 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:29:34.443 00:29:34.443 --- 10.0.0.2 ping statistics --- 00:29:34.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.443 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:29:34.443 00:29:34.443 --- 10.0.0.1 ping statistics --- 00:29:34.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.443 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=528911 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 528911 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 528911 ']' 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.443 20:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.443 [2024-12-05 20:49:27.013362] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:29:34.443 [2024-12-05 20:49:27.013403] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.443 [2024-12-05 20:49:27.088519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:34.443 [2024-12-05 20:49:27.128668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.443 [2024-12-05 20:49:27.128698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:34.443 [2024-12-05 20:49:27.128705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.443 [2024-12-05 20:49:27.128710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.443 [2024-12-05 20:49:27.128715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.443 [2024-12-05 20:49:27.129924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.443 [2024-12-05 20:49:27.130010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.443 [2024-12-05 20:49:27.130012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.443 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.443 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:34.444 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.444 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.444 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.444 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.444 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.444 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.444 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.444 [2024-12-05 20:49:27.875569] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.444 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.703 20:49:27 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.703 Malloc0 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.703 [2024-12-05 20:49:27.934397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.703 { 00:29:34.703 "params": { 00:29:34.703 "name": "Nvme$subsystem", 00:29:34.703 "trtype": "$TEST_TRANSPORT", 00:29:34.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.703 "adrfam": "ipv4", 00:29:34.703 "trsvcid": "$NVMF_PORT", 00:29:34.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.703 "hdgst": ${hdgst:-false}, 00:29:34.703 "ddgst": ${ddgst:-false} 00:29:34.703 }, 00:29:34.703 "method": "bdev_nvme_attach_controller" 00:29:34.703 } 00:29:34.703 EOF 00:29:34.703 )") 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:34.703 20:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:34.703 "params": { 00:29:34.703 "name": "Nvme1", 00:29:34.703 "trtype": "tcp", 00:29:34.703 "traddr": "10.0.0.2", 00:29:34.703 "adrfam": "ipv4", 00:29:34.703 "trsvcid": "4420", 00:29:34.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.703 "hdgst": false, 00:29:34.703 "ddgst": false 00:29:34.703 }, 00:29:34.703 "method": "bdev_nvme_attach_controller" 00:29:34.703 }' 00:29:34.703 [2024-12-05 20:49:27.984053] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:29:34.703 [2024-12-05 20:49:27.984100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529081 ] 00:29:34.703 [2024-12-05 20:49:28.056677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.703 [2024-12-05 20:49:28.094749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.962 Running I/O for 1 seconds... 
00:29:36.158 12254.00 IOPS, 47.87 MiB/s
00:29:36.158 Latency(us)
00:29:36.158 [2024-12-05T19:49:29.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:36.158 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:36.158 Verification LBA range: start 0x0 length 0x4000
00:29:36.158 Nvme1n1 : 1.05 11814.80 46.15 0.00 0.00 10387.65 1668.19 42657.98
00:29:36.158 [2024-12-05T19:49:29.599Z] ===================================================================================================================
00:29:36.158 [2024-12-05T19:49:29.599Z] Total : 11814.80 46.15 0.00 0.00 10387.65 1668.19 42657.98
00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=529378
00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:29:36.158 {
00:29:36.158 "params": {
00:29:36.158 "name": "Nvme$subsystem",
00:29:36.158 "trtype": "$TEST_TRANSPORT",
00:29:36.158 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:36.158 "adrfam": "ipv4",
00:29:36.158 "trsvcid": "$NVMF_PORT",
00:29:36.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:36.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:36.158 "hdgst": ${hdgst:-false},
00:29:36.158 "ddgst":
${ddgst:-false} 00:29:36.158 }, 00:29:36.158 "method": "bdev_nvme_attach_controller" 00:29:36.158 } 00:29:36.158 EOF 00:29:36.158 )") 00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:36.158 20:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:36.158 "params": { 00:29:36.158 "name": "Nvme1", 00:29:36.158 "trtype": "tcp", 00:29:36.158 "traddr": "10.0.0.2", 00:29:36.158 "adrfam": "ipv4", 00:29:36.158 "trsvcid": "4420", 00:29:36.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:36.158 "hdgst": false, 00:29:36.158 "ddgst": false 00:29:36.158 }, 00:29:36.158 "method": "bdev_nvme_attach_controller" 00:29:36.158 }' 00:29:36.158 [2024-12-05 20:49:29.546186] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:29:36.158 [2024-12-05 20:49:29.546230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529378 ] 00:29:36.418 [2024-12-05 20:49:29.617133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.418 [2024-12-05 20:49:29.652162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.418 Running I/O for 15 seconds... 
00:29:38.733 12140.00 IOPS, 47.42 MiB/s [2024-12-05T19:49:32.743Z] 12213.00 IOPS, 47.71 MiB/s [2024-12-05T19:49:32.743Z] 20:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 528911 00:29:39.302 20:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:39.302 [2024-12-05 20:49:32.516396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.302 [2024-12-05 20:49:32.516440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.302 [2024-12-05 20:49:32.516457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.302 [2024-12-05 20:49:32.516468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.302 [2024-12-05 20:49:32.516477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.302 [2024-12-05 20:49:32.516484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.302 [2024-12-05 20:49:32.516492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.302 [2024-12-05 20:49:32.516499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.302 [2024-12-05 20:49:32.516507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.302 [2024-12-05 20:49:32.516514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 
20:49:32.516601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:86 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:39.303 [2024-12-05 20:49:32.516775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1056 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.516988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.516994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.517001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.517007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 
20:49:32.517014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.517021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.517028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.303 [2024-12-05 20:49:32.517034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.303 [2024-12-05 20:49:32.517041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517203] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 
[2024-12-05 20:49:32.517513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517586] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.304 [2024-12-05 20:49:32.517653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.304 [2024-12-05 20:49:32.517659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.305 [2024-12-05 20:49:32.517673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.305 [2024-12-05 20:49:32.517686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.305 [2024-12-05 20:49:32.517705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1512 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517899] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.517989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.517997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 
20:49:32.518056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:121 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.305 [2024-12-05 20:49:32.518172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.305 [2024-12-05 20:49:32.518179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.306 [2024-12-05 20:49:32.518185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.306 [2024-12-05 20:49:32.518193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.306 [2024-12-05 20:49:32.518199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.306 [2024-12-05 20:49:32.518206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.306 [2024-12-05 20:49:32.518212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:39.306 [2024-12-05 20:49:32.518219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.306 [2024-12-05 20:49:32.518225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.306 [2024-12-05 20:49:32.518233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.306 [2024-12-05 20:49:32.518239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.306 [2024-12-05 20:49:32.518246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.306 [2024-12-05 20:49:32.518252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.306 [2024-12-05 20:49:32.518259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.306 [2024-12-05 20:49:32.518265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.306 [2024-12-05 20:49:32.518272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.306 [2024-12-05 20:49:32.518279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.306 [2024-12-05 20:49:32.518286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.306 [2024-12-05 20:49:32.518292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.306 [2024-12-05 20:49:32.518299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.306 [2024-12-05 20:49:32.518305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.306 [2024-12-05 20:49:32.518312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.306 [2024-12-05 20:49:32.518318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.306 [2024-12-05 20:49:32.518325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5f00 is same with the state(6) to be set 00:29:39.306 [2024-12-05 20:49:32.518333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:39.306 [2024-12-05 20:49:32.518340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:39.306 [2024-12-05 20:49:32.518346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1808 len:8 PRP1 0x0 PRP2 0x0 00:29:39.306 [2024-12-05 20:49:32.518354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.306 [2024-12-05 20:49:32.520966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.306 [2024-12-05 20:49:32.521019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.306 [2024-12-05 20:49:32.521513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.306 
[2024-12-05 20:49:32.521528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.306 [2024-12-05 20:49:32.521536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.306 [2024-12-05 20:49:32.521723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.306 [2024-12-05 20:49:32.521907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.306 [2024-12-05 20:49:32.521914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.306 [2024-12-05 20:49:32.521922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.306 [2024-12-05 20:49:32.521929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.306 [2024-12-05 20:49:32.534372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.306 [2024-12-05 20:49:32.534854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.306 [2024-12-05 20:49:32.534872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.306 [2024-12-05 20:49:32.534879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.306 [2024-12-05 20:49:32.535063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.306 [2024-12-05 20:49:32.535244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.306 [2024-12-05 20:49:32.535252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.306 [2024-12-05 20:49:32.535258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.306 [2024-12-05 20:49:32.535264] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.306 [2024-12-05 20:49:32.547440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.306 [2024-12-05 20:49:32.547835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.306 [2024-12-05 20:49:32.547851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.306 [2024-12-05 20:49:32.547858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.306 [2024-12-05 20:49:32.548041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.306 [2024-12-05 20:49:32.548230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.306 [2024-12-05 20:49:32.548239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.306 [2024-12-05 20:49:32.548248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.306 [2024-12-05 20:49:32.548254] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.306 [2024-12-05 20:49:32.560403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.306 [2024-12-05 20:49:32.560878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.306 [2024-12-05 20:49:32.560923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.306 [2024-12-05 20:49:32.560947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.306 [2024-12-05 20:49:32.561483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.306 [2024-12-05 20:49:32.561663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.306 [2024-12-05 20:49:32.561671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.306 [2024-12-05 20:49:32.561677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.306 [2024-12-05 20:49:32.561683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.306 [2024-12-05 20:49:32.573409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.306 [2024-12-05 20:49:32.573768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.306 [2024-12-05 20:49:32.573783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.306 [2024-12-05 20:49:32.573791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.306 [2024-12-05 20:49:32.573969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.306 [2024-12-05 20:49:32.574153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.306 [2024-12-05 20:49:32.574162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.306 [2024-12-05 20:49:32.574168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.306 [2024-12-05 20:49:32.574173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.306 [2024-12-05 20:49:32.586479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.306 [2024-12-05 20:49:32.586843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.306 [2024-12-05 20:49:32.586858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.306 [2024-12-05 20:49:32.586865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.306 [2024-12-05 20:49:32.587043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.306 [2024-12-05 20:49:32.587230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.306 [2024-12-05 20:49:32.587239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.306 [2024-12-05 20:49:32.587245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.306 [2024-12-05 20:49:32.587250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.306 [2024-12-05 20:49:32.599485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.306 [2024-12-05 20:49:32.599849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.307 [2024-12-05 20:49:32.599864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.307 [2024-12-05 20:49:32.599870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.307 [2024-12-05 20:49:32.600049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.307 [2024-12-05 20:49:32.600234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.307 [2024-12-05 20:49:32.600242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.307 [2024-12-05 20:49:32.600247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.307 [2024-12-05 20:49:32.600253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.307 [2024-12-05 20:49:32.612570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.307 [2024-12-05 20:49:32.613004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.307 [2024-12-05 20:49:32.613019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.307 [2024-12-05 20:49:32.613026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.307 [2024-12-05 20:49:32.613209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.307 [2024-12-05 20:49:32.613387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.307 [2024-12-05 20:49:32.613395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.307 [2024-12-05 20:49:32.613401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.307 [2024-12-05 20:49:32.613407] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.307 [2024-12-05 20:49:32.625565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.307 [2024-12-05 20:49:32.625940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.307 [2024-12-05 20:49:32.625985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.307 [2024-12-05 20:49:32.626008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.307 [2024-12-05 20:49:32.626651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.307 [2024-12-05 20:49:32.626831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.307 [2024-12-05 20:49:32.626839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.307 [2024-12-05 20:49:32.626845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.307 [2024-12-05 20:49:32.626850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.307 [2024-12-05 20:49:32.638644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.307 [2024-12-05 20:49:32.639029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.307 [2024-12-05 20:49:32.639085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.307 [2024-12-05 20:49:32.639118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.307 [2024-12-05 20:49:32.639661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.307 [2024-12-05 20:49:32.639840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.307 [2024-12-05 20:49:32.639848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.307 [2024-12-05 20:49:32.639853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.307 [2024-12-05 20:49:32.639859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.307 [2024-12-05 20:49:32.654465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.307 [2024-12-05 20:49:32.654972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.307 [2024-12-05 20:49:32.655017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.307 [2024-12-05 20:49:32.655040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.307 [2024-12-05 20:49:32.655645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.307 [2024-12-05 20:49:32.655939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.307 [2024-12-05 20:49:32.655952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.307 [2024-12-05 20:49:32.655961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.307 [2024-12-05 20:49:32.655970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.307 [2024-12-05 20:49:32.667863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.307 [2024-12-05 20:49:32.668279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.307 [2024-12-05 20:49:32.668296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.307 [2024-12-05 20:49:32.668303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.307 [2024-12-05 20:49:32.668497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.307 [2024-12-05 20:49:32.668691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.307 [2024-12-05 20:49:32.668700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.307 [2024-12-05 20:49:32.668706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.307 [2024-12-05 20:49:32.668712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.307 [2024-12-05 20:49:32.680928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.307 [2024-12-05 20:49:32.681335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.307 [2024-12-05 20:49:32.681351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.307 [2024-12-05 20:49:32.681358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.307 [2024-12-05 20:49:32.681535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.307 [2024-12-05 20:49:32.681717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.307 [2024-12-05 20:49:32.681724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.307 [2024-12-05 20:49:32.681730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.307 [2024-12-05 20:49:32.681736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.307 [2024-12-05 20:49:32.694102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.307 [2024-12-05 20:49:32.694571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.307 [2024-12-05 20:49:32.694614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.307 [2024-12-05 20:49:32.694637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.307 [2024-12-05 20:49:32.695102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.307 [2024-12-05 20:49:32.695286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.307 [2024-12-05 20:49:32.695294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.307 [2024-12-05 20:49:32.695301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.308 [2024-12-05 20:49:32.695306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.308 [2024-12-05 20:49:32.707202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.308 [2024-12-05 20:49:32.707737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.308 [2024-12-05 20:49:32.707755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.308 [2024-12-05 20:49:32.707763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.308 [2024-12-05 20:49:32.707947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.308 [2024-12-05 20:49:32.708135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.308 [2024-12-05 20:49:32.708143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.308 [2024-12-05 20:49:32.708149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.308 [2024-12-05 20:49:32.708155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.308 [2024-12-05 20:49:32.720328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.308 [2024-12-05 20:49:32.720692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.308 [2024-12-05 20:49:32.720707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.308 [2024-12-05 20:49:32.720714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.308 [2024-12-05 20:49:32.720892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.308 [2024-12-05 20:49:32.721079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.308 [2024-12-05 20:49:32.721087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.308 [2024-12-05 20:49:32.721096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.308 [2024-12-05 20:49:32.721102] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.308 [2024-12-05 20:49:32.733304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.308 [2024-12-05 20:49:32.733738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.308 [2024-12-05 20:49:32.733753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.308 [2024-12-05 20:49:32.733760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.308 [2024-12-05 20:49:32.733930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.308 [2024-12-05 20:49:32.734124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.308 [2024-12-05 20:49:32.734132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.308 [2024-12-05 20:49:32.734138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.308 [2024-12-05 20:49:32.734143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.568 [2024-12-05 20:49:32.746357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.568 [2024-12-05 20:49:32.746830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.568 [2024-12-05 20:49:32.746874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.568 [2024-12-05 20:49:32.746898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.568 [2024-12-05 20:49:32.747581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.568 [2024-12-05 20:49:32.748096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.568 [2024-12-05 20:49:32.748104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.568 [2024-12-05 20:49:32.748110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.568 [2024-12-05 20:49:32.748116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.568 [2024-12-05 20:49:32.759485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.568 [2024-12-05 20:49:32.759800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.568 [2024-12-05 20:49:32.759815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.568 [2024-12-05 20:49:32.759822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.568 [2024-12-05 20:49:32.759991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.568 [2024-12-05 20:49:32.760186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.568 [2024-12-05 20:49:32.760194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.568 [2024-12-05 20:49:32.760200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.568 [2024-12-05 20:49:32.760205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.568 [2024-12-05 20:49:32.772550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.568 [2024-12-05 20:49:32.773008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.568 [2024-12-05 20:49:32.773024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.568 [2024-12-05 20:49:32.773030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.568 [2024-12-05 20:49:32.773233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.568 [2024-12-05 20:49:32.773418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.568 [2024-12-05 20:49:32.773426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.568 [2024-12-05 20:49:32.773431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.568 [2024-12-05 20:49:32.773437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.568 [2024-12-05 20:49:32.785744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.568 [2024-12-05 20:49:32.786162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.568 [2024-12-05 20:49:32.786178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.568 [2024-12-05 20:49:32.786185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.568 [2024-12-05 20:49:32.786368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.568 [2024-12-05 20:49:32.786551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.568 [2024-12-05 20:49:32.786559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.568 [2024-12-05 20:49:32.786565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.569 [2024-12-05 20:49:32.786571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.569 [2024-12-05 20:49:32.798973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.569 [2024-12-05 20:49:32.799395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.569 [2024-12-05 20:49:32.799411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.569 [2024-12-05 20:49:32.799418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.569 [2024-12-05 20:49:32.799596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.569 [2024-12-05 20:49:32.799775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.569 [2024-12-05 20:49:32.799783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.569 [2024-12-05 20:49:32.799788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.569 [2024-12-05 20:49:32.799794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.569 [2024-12-05 20:49:32.812146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.569 [2024-12-05 20:49:32.812610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.569 [2024-12-05 20:49:32.812626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.569 [2024-12-05 20:49:32.812657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.569 [2024-12-05 20:49:32.813344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.569 [2024-12-05 20:49:32.813601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.569 [2024-12-05 20:49:32.813609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.569 [2024-12-05 20:49:32.813615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.569 [2024-12-05 20:49:32.813620] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.569 10955.67 IOPS, 42.80 MiB/s [2024-12-05T19:49:33.010Z] [2024-12-05 20:49:32.825213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.569 [2024-12-05 20:49:32.825554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.569 [2024-12-05 20:49:32.825569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.569 [2024-12-05 20:49:32.825576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.569 [2024-12-05 20:49:32.825745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.569 [2024-12-05 20:49:32.825936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.569 [2024-12-05 20:49:32.825944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.569 [2024-12-05 20:49:32.825950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.569 [2024-12-05 20:49:32.825955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.569 [2024-12-05 20:49:32.838217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.569 [2024-12-05 20:49:32.838658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.569 [2024-12-05 20:49:32.838673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.569 [2024-12-05 20:49:32.838680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.569 [2024-12-05 20:49:32.838850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.569 [2024-12-05 20:49:32.839019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.569 [2024-12-05 20:49:32.839026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.569 [2024-12-05 20:49:32.839032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.569 [2024-12-05 20:49:32.839037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.569 [2024-12-05 20:49:32.851180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.569 [2024-12-05 20:49:32.851507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.569 [2024-12-05 20:49:32.851560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.569 [2024-12-05 20:49:32.851584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.569 [2024-12-05 20:49:32.852243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.569 [2024-12-05 20:49:32.852698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.569 [2024-12-05 20:49:32.852716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.569 [2024-12-05 20:49:32.852730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.569 [2024-12-05 20:49:32.852743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.569 [2024-12-05 20:49:32.866879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.569 [2024-12-05 20:49:32.867443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.569 [2024-12-05 20:49:32.867465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.569 [2024-12-05 20:49:32.867475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.569 [2024-12-05 20:49:32.867767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.569 [2024-12-05 20:49:32.868069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.569 [2024-12-05 20:49:32.868081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.569 [2024-12-05 20:49:32.868090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.569 [2024-12-05 20:49:32.868100] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.569 [2024-12-05 20:49:32.880430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.569 [2024-12-05 20:49:32.880902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.569 [2024-12-05 20:49:32.880919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.569 [2024-12-05 20:49:32.880926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.569 [2024-12-05 20:49:32.881131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.569 [2024-12-05 20:49:32.881332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.569 [2024-12-05 20:49:32.881340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.569 [2024-12-05 20:49:32.881347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.569 [2024-12-05 20:49:32.881353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.569 [2024-12-05 20:49:32.893413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.569 [2024-12-05 20:49:32.893759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.569 [2024-12-05 20:49:32.893774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.569 [2024-12-05 20:49:32.893781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.569 [2024-12-05 20:49:32.893958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.569 [2024-12-05 20:49:32.894144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.569 [2024-12-05 20:49:32.894152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.569 [2024-12-05 20:49:32.894161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.569 [2024-12-05 20:49:32.894167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.569 [2024-12-05 20:49:32.906481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.569 [2024-12-05 20:49:32.906918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.569 [2024-12-05 20:49:32.906951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.569 [2024-12-05 20:49:32.906977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.569 [2024-12-05 20:49:32.907611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.569 [2024-12-05 20:49:32.907790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.569 [2024-12-05 20:49:32.907797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.569 [2024-12-05 20:49:32.907803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.569 [2024-12-05 20:49:32.907809] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.569 [2024-12-05 20:49:32.919445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.570 [2024-12-05 20:49:32.919879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.570 [2024-12-05 20:49:32.919895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.570 [2024-12-05 20:49:32.919902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.570 [2024-12-05 20:49:32.920500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.570 [2024-12-05 20:49:32.920680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.570 [2024-12-05 20:49:32.920687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.570 [2024-12-05 20:49:32.920693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.570 [2024-12-05 20:49:32.920698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.570 [2024-12-05 20:49:32.932471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.570 [2024-12-05 20:49:32.932875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.570 [2024-12-05 20:49:32.932890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.570 [2024-12-05 20:49:32.932896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.570 [2024-12-05 20:49:32.933071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.570 [2024-12-05 20:49:32.933264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.570 [2024-12-05 20:49:32.933272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.570 [2024-12-05 20:49:32.933278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.570 [2024-12-05 20:49:32.933283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.570 [2024-12-05 20:49:32.945619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.570 [2024-12-05 20:49:32.946097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.570 [2024-12-05 20:49:32.946141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.570 [2024-12-05 20:49:32.946164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.570 [2024-12-05 20:49:32.946609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.570 [2024-12-05 20:49:32.946788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.570 [2024-12-05 20:49:32.946795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.570 [2024-12-05 20:49:32.946801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.570 [2024-12-05 20:49:32.946807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.570 [2024-12-05 20:49:32.958591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.570 [2024-12-05 20:49:32.958973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.570 [2024-12-05 20:49:32.958988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.570 [2024-12-05 20:49:32.958994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.570 [2024-12-05 20:49:32.959189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.570 [2024-12-05 20:49:32.959369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.570 [2024-12-05 20:49:32.959376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.570 [2024-12-05 20:49:32.959382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.570 [2024-12-05 20:49:32.959388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.570 [2024-12-05 20:49:32.971641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.570 [2024-12-05 20:49:32.972081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.570 [2024-12-05 20:49:32.972096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.570 [2024-12-05 20:49:32.972103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.570 [2024-12-05 20:49:32.972272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.570 [2024-12-05 20:49:32.972442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.570 [2024-12-05 20:49:32.972449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.570 [2024-12-05 20:49:32.972454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.570 [2024-12-05 20:49:32.972460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.570 [2024-12-05 20:49:32.984708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.570 [2024-12-05 20:49:32.985143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.570 [2024-12-05 20:49:32.985158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.570 [2024-12-05 20:49:32.985167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.570 [2024-12-05 20:49:32.985337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.570 [2024-12-05 20:49:32.985506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.570 [2024-12-05 20:49:32.985514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.570 [2024-12-05 20:49:32.985519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.570 [2024-12-05 20:49:32.985524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.570 [2024-12-05 20:49:32.997892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.570 [2024-12-05 20:49:32.998345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.570 [2024-12-05 20:49:32.998390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.570 [2024-12-05 20:49:32.998413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.570 [2024-12-05 20:49:32.998852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.570 [2024-12-05 20:49:32.999032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.570 [2024-12-05 20:49:32.999039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.570 [2024-12-05 20:49:32.999045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.570 [2024-12-05 20:49:32.999051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.831 [2024-12-05 20:49:33.010891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.831 [2024-12-05 20:49:33.011354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.831 [2024-12-05 20:49:33.011398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.831 [2024-12-05 20:49:33.011422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.831 [2024-12-05 20:49:33.012033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.831 [2024-12-05 20:49:33.012218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.831 [2024-12-05 20:49:33.012226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.831 [2024-12-05 20:49:33.012232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.831 [2024-12-05 20:49:33.012237] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.831 [2024-12-05 20:49:33.023882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.831 [2024-12-05 20:49:33.024347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.831 [2024-12-05 20:49:33.024364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.831 [2024-12-05 20:49:33.024371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.831 [2024-12-05 20:49:33.024554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.831 [2024-12-05 20:49:33.024744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.831 [2024-12-05 20:49:33.024752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.831 [2024-12-05 20:49:33.024758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.831 [2024-12-05 20:49:33.024764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.831 [2024-12-05 20:49:33.036977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.831 [2024-12-05 20:49:33.037336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.831 [2024-12-05 20:49:33.037354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.831 [2024-12-05 20:49:33.037360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.831 [2024-12-05 20:49:33.037538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.831 [2024-12-05 20:49:33.037718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.831 [2024-12-05 20:49:33.037725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.831 [2024-12-05 20:49:33.037731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.831 [2024-12-05 20:49:33.037736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.831 [2024-12-05 20:49:33.049970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.831 [2024-12-05 20:49:33.050407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.831 [2024-12-05 20:49:33.050422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.831 [2024-12-05 20:49:33.050428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.831 [2024-12-05 20:49:33.050607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.831 [2024-12-05 20:49:33.050785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.831 [2024-12-05 20:49:33.050793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.831 [2024-12-05 20:49:33.050799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.831 [2024-12-05 20:49:33.050804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.831 [2024-12-05 20:49:33.062964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.831 [2024-12-05 20:49:33.063421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.831 [2024-12-05 20:49:33.063452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.831 [2024-12-05 20:49:33.063475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.831 [2024-12-05 20:49:33.064047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.831 [2024-12-05 20:49:33.064232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.831 [2024-12-05 20:49:33.064240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.831 [2024-12-05 20:49:33.064249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.831 [2024-12-05 20:49:33.064255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.831 [2024-12-05 20:49:33.075894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.831 [2024-12-05 20:49:33.076350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.831 [2024-12-05 20:49:33.076365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.831 [2024-12-05 20:49:33.076372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.831 [2024-12-05 20:49:33.076550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.831 [2024-12-05 20:49:33.076733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.831 [2024-12-05 20:49:33.076740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.831 [2024-12-05 20:49:33.076746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.831 [2024-12-05 20:49:33.076752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.831 [2024-12-05 20:49:33.088850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.831 [2024-12-05 20:49:33.089309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.831 [2024-12-05 20:49:33.089325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.831 [2024-12-05 20:49:33.089331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.831 [2024-12-05 20:49:33.089510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.831 [2024-12-05 20:49:33.089688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.831 [2024-12-05 20:49:33.089696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.831 [2024-12-05 20:49:33.089701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.831 [2024-12-05 20:49:33.089707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.831 [2024-12-05 20:49:33.101901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.831 [2024-12-05 20:49:33.102268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.831 [2024-12-05 20:49:33.102284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.831 [2024-12-05 20:49:33.102290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.831 [2024-12-05 20:49:33.102468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.831 [2024-12-05 20:49:33.102650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.831 [2024-12-05 20:49:33.102658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.831 [2024-12-05 20:49:33.102664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.831 [2024-12-05 20:49:33.102669] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.832 [2024-12-05 20:49:33.114938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.832 [2024-12-05 20:49:33.115407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.832 [2024-12-05 20:49:33.115452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.832 [2024-12-05 20:49:33.115475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.832 [2024-12-05 20:49:33.116140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.832 [2024-12-05 20:49:33.116590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.832 [2024-12-05 20:49:33.116607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.832 [2024-12-05 20:49:33.116622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.832 [2024-12-05 20:49:33.116635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.832 [2024-12-05 20:49:33.130662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.832 [2024-12-05 20:49:33.131238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.832 [2024-12-05 20:49:33.131260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.832 [2024-12-05 20:49:33.131270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.832 [2024-12-05 20:49:33.131562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.832 [2024-12-05 20:49:33.131856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.832 [2024-12-05 20:49:33.131867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.832 [2024-12-05 20:49:33.131877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.832 [2024-12-05 20:49:33.131886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.832 [2024-12-05 20:49:33.144125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.832 [2024-12-05 20:49:33.144559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.832 [2024-12-05 20:49:33.144575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.832 [2024-12-05 20:49:33.144583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.832 [2024-12-05 20:49:33.144781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.832 [2024-12-05 20:49:33.144981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.832 [2024-12-05 20:49:33.144989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.832 [2024-12-05 20:49:33.144996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.832 [2024-12-05 20:49:33.145002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.832 [2024-12-05 20:49:33.157178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.832 [2024-12-05 20:49:33.157598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.832 [2024-12-05 20:49:33.157642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.832 [2024-12-05 20:49:33.157673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.832 [2024-12-05 20:49:33.158181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.832 [2024-12-05 20:49:33.158361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.832 [2024-12-05 20:49:33.158369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.832 [2024-12-05 20:49:33.158375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.832 [2024-12-05 20:49:33.158380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.832 [2024-12-05 20:49:33.170161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.832 [2024-12-05 20:49:33.170584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.832 [2024-12-05 20:49:33.170629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.832 [2024-12-05 20:49:33.170652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.832 [2024-12-05 20:49:33.171252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.832 [2024-12-05 20:49:33.171432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.832 [2024-12-05 20:49:33.171440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.832 [2024-12-05 20:49:33.171445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.832 [2024-12-05 20:49:33.171451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.832 [2024-12-05 20:49:33.183227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.832 [2024-12-05 20:49:33.183572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.832 [2024-12-05 20:49:33.183586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.832 [2024-12-05 20:49:33.183593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.832 [2024-12-05 20:49:33.183762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.832 [2024-12-05 20:49:33.183931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.832 [2024-12-05 20:49:33.183939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.832 [2024-12-05 20:49:33.183944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.832 [2024-12-05 20:49:33.183949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.832 [2024-12-05 20:49:33.196300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.832 [2024-12-05 20:49:33.196743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.832 [2024-12-05 20:49:33.196786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.832 [2024-12-05 20:49:33.196809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.832 [2024-12-05 20:49:33.197309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.832 [2024-12-05 20:49:33.197483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.832 [2024-12-05 20:49:33.197490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.832 [2024-12-05 20:49:33.197496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.832 [2024-12-05 20:49:33.197501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.832 [2024-12-05 20:49:33.209359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.832 [2024-12-05 20:49:33.209791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.832 [2024-12-05 20:49:33.209819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.832 [2024-12-05 20:49:33.209843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.832 [2024-12-05 20:49:33.210434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.832 [2024-12-05 20:49:33.210610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.832 [2024-12-05 20:49:33.210618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.832 [2024-12-05 20:49:33.210623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.832 [2024-12-05 20:49:33.210629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.832 [2024-12-05 20:49:33.222322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.832 [2024-12-05 20:49:33.222641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.832 [2024-12-05 20:49:33.222656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:39.832 [2024-12-05 20:49:33.222662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:39.832 [2024-12-05 20:49:33.222832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:39.832 [2024-12-05 20:49:33.223001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.832 [2024-12-05 20:49:33.223009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.832 [2024-12-05 20:49:33.223014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.832 [2024-12-05 20:49:33.223019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.832 [2024-12-05 20:49:33.235277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.832 [2024-12-05 20:49:33.235712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.832 [2024-12-05 20:49:33.235727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.832 [2024-12-05 20:49:33.235733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.833 [2024-12-05 20:49:33.235902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.833 [2024-12-05 20:49:33.236077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.833 [2024-12-05 20:49:33.236085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.833 [2024-12-05 20:49:33.236094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.833 [2024-12-05 20:49:33.236099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.833 [2024-12-05 20:49:33.248272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.833 [2024-12-05 20:49:33.248734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.833 [2024-12-05 20:49:33.248751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.833 [2024-12-05 20:49:33.248758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.833 [2024-12-05 20:49:33.248945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.833 [2024-12-05 20:49:33.249122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.833 [2024-12-05 20:49:33.249129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.833 [2024-12-05 20:49:33.249135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.833 [2024-12-05 20:49:33.249141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.833 [2024-12-05 20:49:33.261326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.833 [2024-12-05 20:49:33.261701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.833 [2024-12-05 20:49:33.261717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:39.833 [2024-12-05 20:49:33.261724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:39.833 [2024-12-05 20:49:33.261902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:39.833 [2024-12-05 20:49:33.262087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.833 [2024-12-05 20:49:33.262095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.833 [2024-12-05 20:49:33.262101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.833 [2024-12-05 20:49:33.262106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.093 [2024-12-05 20:49:33.274297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.093 [2024-12-05 20:49:33.274754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.093 [2024-12-05 20:49:33.274770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.093 [2024-12-05 20:49:33.274777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.093 [2024-12-05 20:49:33.274960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.093 [2024-12-05 20:49:33.275151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.093 [2024-12-05 20:49:33.275159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.093 [2024-12-05 20:49:33.275165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.093 [2024-12-05 20:49:33.275171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.093 [2024-12-05 20:49:33.287505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.093 [2024-12-05 20:49:33.287881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.093 [2024-12-05 20:49:33.287899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.093 [2024-12-05 20:49:33.287906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.093 [2024-12-05 20:49:33.288095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.093 [2024-12-05 20:49:33.288281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.093 [2024-12-05 20:49:33.288290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.093 [2024-12-05 20:49:33.288296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.093 [2024-12-05 20:49:33.288301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.093 [2024-12-05 20:49:33.300739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.093 [2024-12-05 20:49:33.301198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.093 [2024-12-05 20:49:33.301214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.093 [2024-12-05 20:49:33.301222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.093 [2024-12-05 20:49:33.301405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.093 [2024-12-05 20:49:33.301590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.093 [2024-12-05 20:49:33.301598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.093 [2024-12-05 20:49:33.301604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.093 [2024-12-05 20:49:33.301610] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.093 [2024-12-05 20:49:33.313979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.093 [2024-12-05 20:49:33.314428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.093 [2024-12-05 20:49:33.314444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.093 [2024-12-05 20:49:33.314451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.093 [2024-12-05 20:49:33.314629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.093 [2024-12-05 20:49:33.314807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.093 [2024-12-05 20:49:33.314815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.093 [2024-12-05 20:49:33.314821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.093 [2024-12-05 20:49:33.314826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.093 [2024-12-05 20:49:33.326933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.094 [2024-12-05 20:49:33.327368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.094 [2024-12-05 20:49:33.327413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.094 [2024-12-05 20:49:33.327445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.094 [2024-12-05 20:49:33.328130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.094 [2024-12-05 20:49:33.328587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.094 [2024-12-05 20:49:33.328595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.094 [2024-12-05 20:49:33.328600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.094 [2024-12-05 20:49:33.328617] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.094 [2024-12-05 20:49:33.342910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.094 [2024-12-05 20:49:33.343503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.094 [2024-12-05 20:49:33.343548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.094 [2024-12-05 20:49:33.343571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.094 [2024-12-05 20:49:33.344255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.094 [2024-12-05 20:49:33.344716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.094 [2024-12-05 20:49:33.344728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.094 [2024-12-05 20:49:33.344737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.094 [2024-12-05 20:49:33.344746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.094 [2024-12-05 20:49:33.356385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.094 [2024-12-05 20:49:33.356842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.094 [2024-12-05 20:49:33.356858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.094 [2024-12-05 20:49:33.356865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.094 [2024-12-05 20:49:33.357064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.094 [2024-12-05 20:49:33.357259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.094 [2024-12-05 20:49:33.357267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.094 [2024-12-05 20:49:33.357273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.094 [2024-12-05 20:49:33.357279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.094 [2024-12-05 20:49:33.369371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.094 [2024-12-05 20:49:33.369810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.094 [2024-12-05 20:49:33.369825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.094 [2024-12-05 20:49:33.369831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.094 [2024-12-05 20:49:33.370000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.094 [2024-12-05 20:49:33.370198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.094 [2024-12-05 20:49:33.370207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.094 [2024-12-05 20:49:33.370213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.094 [2024-12-05 20:49:33.370218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.094 [2024-12-05 20:49:33.382410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.094 [2024-12-05 20:49:33.382842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.094 [2024-12-05 20:49:33.382857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.094 [2024-12-05 20:49:33.382863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.094 [2024-12-05 20:49:33.383032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.094 [2024-12-05 20:49:33.383229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.094 [2024-12-05 20:49:33.383238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.094 [2024-12-05 20:49:33.383243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.094 [2024-12-05 20:49:33.383249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.094 [2024-12-05 20:49:33.395347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.094 [2024-12-05 20:49:33.395815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.094 [2024-12-05 20:49:33.395860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.094 [2024-12-05 20:49:33.395883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.094 [2024-12-05 20:49:33.396566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.094 [2024-12-05 20:49:33.396761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.094 [2024-12-05 20:49:33.396768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.094 [2024-12-05 20:49:33.396774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.094 [2024-12-05 20:49:33.396779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.094 [2024-12-05 20:49:33.408330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.094 [2024-12-05 20:49:33.408776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.094 [2024-12-05 20:49:33.408820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.094 [2024-12-05 20:49:33.408843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.094 [2024-12-05 20:49:33.409525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.094 [2024-12-05 20:49:33.409892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.094 [2024-12-05 20:49:33.409899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.094 [2024-12-05 20:49:33.409908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.094 [2024-12-05 20:49:33.409914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.094 [2024-12-05 20:49:33.424220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.094 [2024-12-05 20:49:33.424797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.094 [2024-12-05 20:49:33.424843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.094 [2024-12-05 20:49:33.424866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.094 [2024-12-05 20:49:33.425554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.094 [2024-12-05 20:49:33.425848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.094 [2024-12-05 20:49:33.425859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.094 [2024-12-05 20:49:33.425868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.094 [2024-12-05 20:49:33.425877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.094 [2024-12-05 20:49:33.437605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.094 [2024-12-05 20:49:33.437957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.094 [2024-12-05 20:49:33.437972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.094 [2024-12-05 20:49:33.437980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.094 [2024-12-05 20:49:33.438178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.094 [2024-12-05 20:49:33.438372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.094 [2024-12-05 20:49:33.438380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.094 [2024-12-05 20:49:33.438387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.094 [2024-12-05 20:49:33.438392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.094 [2024-12-05 20:49:33.450610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.094 [2024-12-05 20:49:33.451039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.094 [2024-12-05 20:49:33.451054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.094 [2024-12-05 20:49:33.451091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.095 [2024-12-05 20:49:33.451652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.095 [2024-12-05 20:49:33.451831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.095 [2024-12-05 20:49:33.451839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.095 [2024-12-05 20:49:33.451844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.095 [2024-12-05 20:49:33.451849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.095 [2024-12-05 20:49:33.463635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.095 [2024-12-05 20:49:33.464073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.095 [2024-12-05 20:49:33.464088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.095 [2024-12-05 20:49:33.464094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.095 [2024-12-05 20:49:33.464263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.095 [2024-12-05 20:49:33.464433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.095 [2024-12-05 20:49:33.464440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.095 [2024-12-05 20:49:33.464446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.095 [2024-12-05 20:49:33.464451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.095 [2024-12-05 20:49:33.476584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.095 [2024-12-05 20:49:33.476992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.095 [2024-12-05 20:49:33.477006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.095 [2024-12-05 20:49:33.477013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.095 [2024-12-05 20:49:33.477209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.095 [2024-12-05 20:49:33.477389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.095 [2024-12-05 20:49:33.477397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.095 [2024-12-05 20:49:33.477403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.095 [2024-12-05 20:49:33.477408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.095 [2024-12-05 20:49:33.489544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.095 [2024-12-05 20:49:33.489904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.095 [2024-12-05 20:49:33.489920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.095 [2024-12-05 20:49:33.489926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.095 [2024-12-05 20:49:33.490110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.095 [2024-12-05 20:49:33.490289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.095 [2024-12-05 20:49:33.490297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.095 [2024-12-05 20:49:33.490302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.095 [2024-12-05 20:49:33.490308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.095 [2024-12-05 20:49:33.502468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.095 [2024-12-05 20:49:33.502879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.095 [2024-12-05 20:49:33.502893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.095 [2024-12-05 20:49:33.502902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.095 [2024-12-05 20:49:33.503077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.095 [2024-12-05 20:49:33.503271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.095 [2024-12-05 20:49:33.503278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.095 [2024-12-05 20:49:33.503284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.095 [2024-12-05 20:49:33.503289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.095 [2024-12-05 20:49:33.515594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.095 [2024-12-05 20:49:33.516049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.095 [2024-12-05 20:49:33.516069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.095 [2024-12-05 20:49:33.516076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.095 [2024-12-05 20:49:33.516255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.095 [2024-12-05 20:49:33.516434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.095 [2024-12-05 20:49:33.516441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.095 [2024-12-05 20:49:33.516447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.095 [2024-12-05 20:49:33.516453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.095 [2024-12-05 20:49:33.528703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.095 [2024-12-05 20:49:33.529203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.095 [2024-12-05 20:49:33.529218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.095 [2024-12-05 20:49:33.529225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.095 [2024-12-05 20:49:33.529408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.095 [2024-12-05 20:49:33.529591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.095 [2024-12-05 20:49:33.529599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.095 [2024-12-05 20:49:33.529605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.095 [2024-12-05 20:49:33.529611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.356 [2024-12-05 20:49:33.541778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.356 [2024-12-05 20:49:33.542153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-12-05 20:49:33.542169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.356 [2024-12-05 20:49:33.542176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.356 [2024-12-05 20:49:33.542355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.356 [2024-12-05 20:49:33.542538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.356 [2024-12-05 20:49:33.542546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.356 [2024-12-05 20:49:33.542551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.356 [2024-12-05 20:49:33.542557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.356 [2024-12-05 20:49:33.554913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.356 [2024-12-05 20:49:33.555318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-12-05 20:49:33.555335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.356 [2024-12-05 20:49:33.555342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.356 [2024-12-05 20:49:33.555519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.356 [2024-12-05 20:49:33.555701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.356 [2024-12-05 20:49:33.555709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.356 [2024-12-05 20:49:33.555714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.356 [2024-12-05 20:49:33.555720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.356 [2024-12-05 20:49:33.567928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.356 [2024-12-05 20:49:33.568386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-12-05 20:49:33.568401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.356 [2024-12-05 20:49:33.568408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.356 [2024-12-05 20:49:33.568586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.356 [2024-12-05 20:49:33.568764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.356 [2024-12-05 20:49:33.568772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.356 [2024-12-05 20:49:33.568778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.356 [2024-12-05 20:49:33.568783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.356 [2024-12-05 20:49:33.580886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.356 [2024-12-05 20:49:33.581318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-12-05 20:49:33.581334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.356 [2024-12-05 20:49:33.581341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.356 [2024-12-05 20:49:33.581518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.356 [2024-12-05 20:49:33.581697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.356 [2024-12-05 20:49:33.581704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.356 [2024-12-05 20:49:33.581713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.356 [2024-12-05 20:49:33.581719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.356 [2024-12-05 20:49:33.593816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.356 [2024-12-05 20:49:33.594227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-12-05 20:49:33.594242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.356 [2024-12-05 20:49:33.594249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.356 [2024-12-05 20:49:33.594426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.356 [2024-12-05 20:49:33.594605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.356 [2024-12-05 20:49:33.594613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.356 [2024-12-05 20:49:33.594618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.356 [2024-12-05 20:49:33.594624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.356 [2024-12-05 20:49:33.606796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.356 [2024-12-05 20:49:33.607204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-12-05 20:49:33.607218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.356 [2024-12-05 20:49:33.607224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.356 [2024-12-05 20:49:33.607393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.356 [2024-12-05 20:49:33.607563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.356 [2024-12-05 20:49:33.607570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.356 [2024-12-05 20:49:33.607576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.356 [2024-12-05 20:49:33.607581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.356 [2024-12-05 20:49:33.619779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.356 [2024-12-05 20:49:33.620237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-12-05 20:49:33.620253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.356 [2024-12-05 20:49:33.620259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.356 [2024-12-05 20:49:33.620440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.356 [2024-12-05 20:49:33.620611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.357 [2024-12-05 20:49:33.620619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.357 [2024-12-05 20:49:33.620625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.357 [2024-12-05 20:49:33.620630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.357 [2024-12-05 20:49:33.632721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.357 [2024-12-05 20:49:33.633184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-12-05 20:49:33.633200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.357 [2024-12-05 20:49:33.633206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.357 [2024-12-05 20:49:33.633384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.357 [2024-12-05 20:49:33.633566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.357 [2024-12-05 20:49:33.633574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.357 [2024-12-05 20:49:33.633579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.357 [2024-12-05 20:49:33.633585] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.357 [2024-12-05 20:49:33.645704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.357 [2024-12-05 20:49:33.646144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-12-05 20:49:33.646188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.357 [2024-12-05 20:49:33.646211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.357 [2024-12-05 20:49:33.646881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.357 [2024-12-05 20:49:33.647310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.357 [2024-12-05 20:49:33.647318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.357 [2024-12-05 20:49:33.647324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.357 [2024-12-05 20:49:33.647329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.357 [2024-12-05 20:49:33.658631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.357 [2024-12-05 20:49:33.659074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-12-05 20:49:33.659089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.357 [2024-12-05 20:49:33.659096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.357 [2024-12-05 20:49:33.659274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.357 [2024-12-05 20:49:33.659453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.357 [2024-12-05 20:49:33.659460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.357 [2024-12-05 20:49:33.659466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.357 [2024-12-05 20:49:33.659471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.357 [2024-12-05 20:49:33.671573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.357 [2024-12-05 20:49:33.671986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-12-05 20:49:33.672002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.357 [2024-12-05 20:49:33.672014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.357 [2024-12-05 20:49:33.672212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.357 [2024-12-05 20:49:33.672391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.357 [2024-12-05 20:49:33.672398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.357 [2024-12-05 20:49:33.672404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.357 [2024-12-05 20:49:33.672410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.357 [2024-12-05 20:49:33.684613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.357 [2024-12-05 20:49:33.685027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-12-05 20:49:33.685042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.357 [2024-12-05 20:49:33.685049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.357 [2024-12-05 20:49:33.685231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.357 [2024-12-05 20:49:33.685411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.357 [2024-12-05 20:49:33.685419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.357 [2024-12-05 20:49:33.685424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.357 [2024-12-05 20:49:33.685430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.357 [2024-12-05 20:49:33.697708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.357 [2024-12-05 20:49:33.698197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-12-05 20:49:33.698240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.357 [2024-12-05 20:49:33.698263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.357 [2024-12-05 20:49:33.698931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.357 [2024-12-05 20:49:33.699476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.357 [2024-12-05 20:49:33.699485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.357 [2024-12-05 20:49:33.699490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.357 [2024-12-05 20:49:33.699496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.357 [2024-12-05 20:49:33.710725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.357 [2024-12-05 20:49:33.711183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-12-05 20:49:33.711199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.357 [2024-12-05 20:49:33.711207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.357 [2024-12-05 20:49:33.711387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.357 [2024-12-05 20:49:33.711569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.357 [2024-12-05 20:49:33.711577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.357 [2024-12-05 20:49:33.711583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.357 [2024-12-05 20:49:33.711588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.357 [2024-12-05 20:49:33.723796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.357 [2024-12-05 20:49:33.724174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-12-05 20:49:33.724190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.357 [2024-12-05 20:49:33.724196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.357 [2024-12-05 20:49:33.724375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.357 [2024-12-05 20:49:33.724555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.357 [2024-12-05 20:49:33.724563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.357 [2024-12-05 20:49:33.724568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.357 [2024-12-05 20:49:33.724574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.357 [2024-12-05 20:49:33.736858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.357 [2024-12-05 20:49:33.737231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-12-05 20:49:33.737247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.357 [2024-12-05 20:49:33.737254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.357 [2024-12-05 20:49:33.737432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.357 [2024-12-05 20:49:33.737610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.357 [2024-12-05 20:49:33.737618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.357 [2024-12-05 20:49:33.737624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.358 [2024-12-05 20:49:33.737630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.358 [2024-12-05 20:49:33.750034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.358 [2024-12-05 20:49:33.750492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-12-05 20:49:33.750508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.358 [2024-12-05 20:49:33.750515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.358 [2024-12-05 20:49:33.750693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.358 [2024-12-05 20:49:33.750872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.358 [2024-12-05 20:49:33.750880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.358 [2024-12-05 20:49:33.750889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.358 [2024-12-05 20:49:33.750896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.358 [2024-12-05 20:49:33.763266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.358 [2024-12-05 20:49:33.763569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-12-05 20:49:33.763615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.358 [2024-12-05 20:49:33.763638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.358 [2024-12-05 20:49:33.764319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.358 [2024-12-05 20:49:33.764716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.358 [2024-12-05 20:49:33.764723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.358 [2024-12-05 20:49:33.764729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.358 [2024-12-05 20:49:33.764735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.358 [2024-12-05 20:49:33.776432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.358 [2024-12-05 20:49:33.776871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-12-05 20:49:33.776914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.358 [2024-12-05 20:49:33.776937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.358 [2024-12-05 20:49:33.777509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.358 [2024-12-05 20:49:33.777689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.358 [2024-12-05 20:49:33.777697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.358 [2024-12-05 20:49:33.777702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.358 [2024-12-05 20:49:33.777708] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.358 [2024-12-05 20:49:33.789542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.358 [2024-12-05 20:49:33.789999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-12-05 20:49:33.790015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.358 [2024-12-05 20:49:33.790022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.358 [2024-12-05 20:49:33.790208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.358 [2024-12-05 20:49:33.790392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.358 [2024-12-05 20:49:33.790400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.358 [2024-12-05 20:49:33.790406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.358 [2024-12-05 20:49:33.790411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.619 [2024-12-05 20:49:33.802739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.619 [2024-12-05 20:49:33.803155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.619 [2024-12-05 20:49:33.803171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.619 [2024-12-05 20:49:33.803177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.619 [2024-12-05 20:49:33.803360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.619 [2024-12-05 20:49:33.803545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.619 [2024-12-05 20:49:33.803552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.619 [2024-12-05 20:49:33.803558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.619 [2024-12-05 20:49:33.803564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.619 [2024-12-05 20:49:33.816048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.619 [2024-12-05 20:49:33.816414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.619 [2024-12-05 20:49:33.816430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.619 [2024-12-05 20:49:33.816436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.619 [2024-12-05 20:49:33.816619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.619 [2024-12-05 20:49:33.816802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.619 [2024-12-05 20:49:33.816810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.619 [2024-12-05 20:49:33.816816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.619 [2024-12-05 20:49:33.816821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.619 8216.75 IOPS, 32.10 MiB/s [2024-12-05T19:49:34.060Z] [2024-12-05 20:49:33.829254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.619 [2024-12-05 20:49:33.829673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.619 [2024-12-05 20:49:33.829688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.619 [2024-12-05 20:49:33.829696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.619 [2024-12-05 20:49:33.829879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.619 [2024-12-05 20:49:33.830070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.619 [2024-12-05 20:49:33.830078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.619 [2024-12-05 20:49:33.830084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.619 [2024-12-05 20:49:33.830090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.619 [2024-12-05 20:49:33.842404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.619 [2024-12-05 20:49:33.842905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.619 [2024-12-05 20:49:33.842949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.619 [2024-12-05 20:49:33.842980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.619 [2024-12-05 20:49:33.843494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.619 [2024-12-05 20:49:33.843674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.619 [2024-12-05 20:49:33.843681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.619 [2024-12-05 20:49:33.843687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.619 [2024-12-05 20:49:33.843692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.619 [2024-12-05 20:49:33.855544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.619 [2024-12-05 20:49:33.855977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.619 [2024-12-05 20:49:33.855992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.619 [2024-12-05 20:49:33.855999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.619 [2024-12-05 20:49:33.856184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.619 [2024-12-05 20:49:33.856364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.619 [2024-12-05 20:49:33.856371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.619 [2024-12-05 20:49:33.856377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.619 [2024-12-05 20:49:33.856383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.619 [2024-12-05 20:49:33.868702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.619 [2024-12-05 20:49:33.869091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.619 [2024-12-05 20:49:33.869107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.619 [2024-12-05 20:49:33.869114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.619 [2024-12-05 20:49:33.869292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.619 [2024-12-05 20:49:33.869471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.619 [2024-12-05 20:49:33.869479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.619 [2024-12-05 20:49:33.869484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.619 [2024-12-05 20:49:33.869490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.619 [2024-12-05 20:49:33.881766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.619 [2024-12-05 20:49:33.882203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.619 [2024-12-05 20:49:33.882219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.619 [2024-12-05 20:49:33.882225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.619 [2024-12-05 20:49:33.882405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.619 [2024-12-05 20:49:33.882578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.619 [2024-12-05 20:49:33.882586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.619 [2024-12-05 20:49:33.882591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.619 [2024-12-05 20:49:33.882596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.619 [2024-12-05 20:49:33.894786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.619 [2024-12-05 20:49:33.895212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.619 [2024-12-05 20:49:33.895228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.619 [2024-12-05 20:49:33.895234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.619 [2024-12-05 20:49:33.895412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.619 [2024-12-05 20:49:33.895591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.619 [2024-12-05 20:49:33.895598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.619 [2024-12-05 20:49:33.895604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.619 [2024-12-05 20:49:33.895610] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.619 [2024-12-05 20:49:33.907814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.619 [2024-12-05 20:49:33.908293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.619 [2024-12-05 20:49:33.908349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.619 [2024-12-05 20:49:33.908373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.620 [2024-12-05 20:49:33.908944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.620 [2024-12-05 20:49:33.909128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.620 [2024-12-05 20:49:33.909136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.620 [2024-12-05 20:49:33.909142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.620 [2024-12-05 20:49:33.909147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.620 [2024-12-05 20:49:33.920783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.620 [2024-12-05 20:49:33.921262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.620 [2024-12-05 20:49:33.921278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.620 [2024-12-05 20:49:33.921285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.620 [2024-12-05 20:49:33.921463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.620 [2024-12-05 20:49:33.921641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.620 [2024-12-05 20:49:33.921649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.620 [2024-12-05 20:49:33.921658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.620 [2024-12-05 20:49:33.921663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.620 [2024-12-05 20:49:33.933809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.620 [2024-12-05 20:49:33.934167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.620 [2024-12-05 20:49:33.934182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.620 [2024-12-05 20:49:33.934189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.620 [2024-12-05 20:49:33.934371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.620 [2024-12-05 20:49:33.934540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.620 [2024-12-05 20:49:33.934548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.620 [2024-12-05 20:49:33.934553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.620 [2024-12-05 20:49:33.934558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.620 [2024-12-05 20:49:33.946821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.620 [2024-12-05 20:49:33.947221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.620 [2024-12-05 20:49:33.947237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.620 [2024-12-05 20:49:33.947243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.620 [2024-12-05 20:49:33.947422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.620 [2024-12-05 20:49:33.947603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.620 [2024-12-05 20:49:33.947611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.620 [2024-12-05 20:49:33.947617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.620 [2024-12-05 20:49:33.947622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.620 [2024-12-05 20:49:33.959747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.620 [2024-12-05 20:49:33.960134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.620 [2024-12-05 20:49:33.960149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.620 [2024-12-05 20:49:33.960156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.620 [2024-12-05 20:49:33.960339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.620 [2024-12-05 20:49:33.960510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.620 [2024-12-05 20:49:33.960518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.620 [2024-12-05 20:49:33.960523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.620 [2024-12-05 20:49:33.960528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.620 [2024-12-05 20:49:33.972713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.620 [2024-12-05 20:49:33.973210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.620 [2024-12-05 20:49:33.973255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.620 [2024-12-05 20:49:33.973278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.620 [2024-12-05 20:49:33.973889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.620 [2024-12-05 20:49:33.974074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.620 [2024-12-05 20:49:33.974083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.620 [2024-12-05 20:49:33.974088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.620 [2024-12-05 20:49:33.974094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.620 [2024-12-05 20:49:33.985731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.620 [2024-12-05 20:49:33.986183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.620 [2024-12-05 20:49:33.986198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.620 [2024-12-05 20:49:33.986205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.620 [2024-12-05 20:49:33.986386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.620 [2024-12-05 20:49:33.986556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.620 [2024-12-05 20:49:33.986563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.620 [2024-12-05 20:49:33.986569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.620 [2024-12-05 20:49:33.986574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.620 [2024-12-05 20:49:33.998910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.620 [2024-12-05 20:49:33.999233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.620 [2024-12-05 20:49:33.999249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.620 [2024-12-05 20:49:33.999256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.620 [2024-12-05 20:49:33.999433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.620 [2024-12-05 20:49:33.999616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.620 [2024-12-05 20:49:33.999623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.620 [2024-12-05 20:49:33.999629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.620 [2024-12-05 20:49:33.999635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.620 [2024-12-05 20:49:34.011906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.620 [2024-12-05 20:49:34.012273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.620 [2024-12-05 20:49:34.012288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.620 [2024-12-05 20:49:34.012297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.620 [2024-12-05 20:49:34.012475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.620 [2024-12-05 20:49:34.012653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.620 [2024-12-05 20:49:34.012661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.620 [2024-12-05 20:49:34.012666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.620 [2024-12-05 20:49:34.012672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.620 [2024-12-05 20:49:34.024878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.620 [2024-12-05 20:49:34.025241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.620 [2024-12-05 20:49:34.025256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.620 [2024-12-05 20:49:34.025263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.620 [2024-12-05 20:49:34.025441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.620 [2024-12-05 20:49:34.025620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.621 [2024-12-05 20:49:34.025627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.621 [2024-12-05 20:49:34.025633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.621 [2024-12-05 20:49:34.025639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.621 [2024-12-05 20:49:34.037924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.621 [2024-12-05 20:49:34.038286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.621 [2024-12-05 20:49:34.038302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.621 [2024-12-05 20:49:34.038308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.621 [2024-12-05 20:49:34.038487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.621 [2024-12-05 20:49:34.038665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.621 [2024-12-05 20:49:34.038673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.621 [2024-12-05 20:49:34.038679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.621 [2024-12-05 20:49:34.038684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.621 [2024-12-05 20:49:34.050890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.621 [2024-12-05 20:49:34.051305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.621 [2024-12-05 20:49:34.051323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.621 [2024-12-05 20:49:34.051330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.621 [2024-12-05 20:49:34.051513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.621 [2024-12-05 20:49:34.051700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.621 [2024-12-05 20:49:34.051708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.621 [2024-12-05 20:49:34.051714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.621 [2024-12-05 20:49:34.051719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.882 [2024-12-05 20:49:34.064187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.882 [2024-12-05 20:49:34.064555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-12-05 20:49:34.064571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.882 [2024-12-05 20:49:34.064577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.882 [2024-12-05 20:49:34.064760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.882 [2024-12-05 20:49:34.064950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.882 [2024-12-05 20:49:34.064958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.882 [2024-12-05 20:49:34.064963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.882 [2024-12-05 20:49:34.064969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.882 [2024-12-05 20:49:34.077325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.882 [2024-12-05 20:49:34.077757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-12-05 20:49:34.077773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.882 [2024-12-05 20:49:34.077780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.882 [2024-12-05 20:49:34.077963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.882 [2024-12-05 20:49:34.078150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.882 [2024-12-05 20:49:34.078159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.882 [2024-12-05 20:49:34.078165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.882 [2024-12-05 20:49:34.078171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.882 [2024-12-05 20:49:34.090491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.882 [2024-12-05 20:49:34.090871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-12-05 20:49:34.090886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.882 [2024-12-05 20:49:34.090893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.882 [2024-12-05 20:49:34.091078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.882 [2024-12-05 20:49:34.091257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.882 [2024-12-05 20:49:34.091264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.882 [2024-12-05 20:49:34.091274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.882 [2024-12-05 20:49:34.091280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.882 [2024-12-05 20:49:34.103453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.882 [2024-12-05 20:49:34.103883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-12-05 20:49:34.103927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.882 [2024-12-05 20:49:34.103951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.883 [2024-12-05 20:49:34.104471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.883 [2024-12-05 20:49:34.104651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.883 [2024-12-05 20:49:34.104659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.883 [2024-12-05 20:49:34.104664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.883 [2024-12-05 20:49:34.104670] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.883 [2024-12-05 20:49:34.116467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.883 [2024-12-05 20:49:34.117002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-12-05 20:49:34.117045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.883 [2024-12-05 20:49:34.117076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.883 [2024-12-05 20:49:34.117681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.883 [2024-12-05 20:49:34.118138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.883 [2024-12-05 20:49:34.118156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.883 [2024-12-05 20:49:34.118171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.883 [2024-12-05 20:49:34.118184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.883 [2024-12-05 20:49:34.132196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.883 [2024-12-05 20:49:34.132764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-12-05 20:49:34.132786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.883 [2024-12-05 20:49:34.132796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.883 [2024-12-05 20:49:34.133094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.883 [2024-12-05 20:49:34.133388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.883 [2024-12-05 20:49:34.133400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.883 [2024-12-05 20:49:34.133409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.883 [2024-12-05 20:49:34.133418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.883 [2024-12-05 20:49:34.145579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.883 [2024-12-05 20:49:34.146042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-12-05 20:49:34.146096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.883 [2024-12-05 20:49:34.146119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.883 [2024-12-05 20:49:34.146571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.883 [2024-12-05 20:49:34.146765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.883 [2024-12-05 20:49:34.146773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.883 [2024-12-05 20:49:34.146779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.883 [2024-12-05 20:49:34.146785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.883 [2024-12-05 20:49:34.158561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.883 [2024-12-05 20:49:34.158996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-12-05 20:49:34.159011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.883 [2024-12-05 20:49:34.159018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.883 [2024-12-05 20:49:34.159201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.883 [2024-12-05 20:49:34.159380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.883 [2024-12-05 20:49:34.159387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.883 [2024-12-05 20:49:34.159393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.883 [2024-12-05 20:49:34.159398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.883 [2024-12-05 20:49:34.171599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.883 [2024-12-05 20:49:34.172051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-12-05 20:49:34.172070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.883 [2024-12-05 20:49:34.172077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.883 [2024-12-05 20:49:34.172256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.883 [2024-12-05 20:49:34.172434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.883 [2024-12-05 20:49:34.172442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.883 [2024-12-05 20:49:34.172448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.883 [2024-12-05 20:49:34.172453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.883 [2024-12-05 20:49:34.184573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.883 [2024-12-05 20:49:34.185001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-12-05 20:49:34.185016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.883 [2024-12-05 20:49:34.185026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.883 [2024-12-05 20:49:34.185209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.883 [2024-12-05 20:49:34.185387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.883 [2024-12-05 20:49:34.185395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.883 [2024-12-05 20:49:34.185400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.883 [2024-12-05 20:49:34.185406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.883 [2024-12-05 20:49:34.197677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.883 [2024-12-05 20:49:34.198106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-12-05 20:49:34.198121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.883 [2024-12-05 20:49:34.198128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.883 [2024-12-05 20:49:34.198306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.883 [2024-12-05 20:49:34.198485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.883 [2024-12-05 20:49:34.198493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.883 [2024-12-05 20:49:34.198498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.883 [2024-12-05 20:49:34.198504] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.883 [2024-12-05 20:49:34.210602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.883 [2024-12-05 20:49:34.211009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-12-05 20:49:34.211023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.883 [2024-12-05 20:49:34.211029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.883 [2024-12-05 20:49:34.211224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.883 [2024-12-05 20:49:34.211403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.883 [2024-12-05 20:49:34.211410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.883 [2024-12-05 20:49:34.211416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.883 [2024-12-05 20:49:34.211422] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.883 [2024-12-05 20:49:34.223522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.883 [2024-12-05 20:49:34.223950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-12-05 20:49:34.223965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.883 [2024-12-05 20:49:34.223972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.883 [2024-12-05 20:49:34.224158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.883 [2024-12-05 20:49:34.224340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.884 [2024-12-05 20:49:34.224348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.884 [2024-12-05 20:49:34.224354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.884 [2024-12-05 20:49:34.224359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.884 [2024-12-05 20:49:34.236447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.884 [2024-12-05 20:49:34.236851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-12-05 20:49:34.236866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.884 [2024-12-05 20:49:34.236872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.884 [2024-12-05 20:49:34.237041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.884 [2024-12-05 20:49:34.237239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.884 [2024-12-05 20:49:34.237247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.884 [2024-12-05 20:49:34.237253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.884 [2024-12-05 20:49:34.237259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.884 [2024-12-05 20:49:34.249602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.884 [2024-12-05 20:49:34.250033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-12-05 20:49:34.250048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.884 [2024-12-05 20:49:34.250055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.884 [2024-12-05 20:49:34.250239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.884 [2024-12-05 20:49:34.250417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.884 [2024-12-05 20:49:34.250425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.884 [2024-12-05 20:49:34.250431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.884 [2024-12-05 20:49:34.250436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.884 [2024-12-05 20:49:34.262554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.884 [2024-12-05 20:49:34.262988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-12-05 20:49:34.263033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.884 [2024-12-05 20:49:34.263057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.884 [2024-12-05 20:49:34.263582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.884 [2024-12-05 20:49:34.263761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.884 [2024-12-05 20:49:34.263768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.884 [2024-12-05 20:49:34.263777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.884 [2024-12-05 20:49:34.263783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.884 [2024-12-05 20:49:34.275555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.884 [2024-12-05 20:49:34.275967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-12-05 20:49:34.275982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.884 [2024-12-05 20:49:34.275989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.884 [2024-12-05 20:49:34.276174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.884 [2024-12-05 20:49:34.276353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.884 [2024-12-05 20:49:34.276360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.884 [2024-12-05 20:49:34.276366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.884 [2024-12-05 20:49:34.276371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.884 [2024-12-05 20:49:34.288625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.884 [2024-12-05 20:49:34.288984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-12-05 20:49:34.289000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.884 [2024-12-05 20:49:34.289006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.884 [2024-12-05 20:49:34.289203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.884 [2024-12-05 20:49:34.289383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.884 [2024-12-05 20:49:34.289391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.884 [2024-12-05 20:49:34.289397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.884 [2024-12-05 20:49:34.289402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.884 [2024-12-05 20:49:34.301720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.884 [2024-12-05 20:49:34.302178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-12-05 20:49:34.302195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.884 [2024-12-05 20:49:34.302202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.884 [2024-12-05 20:49:34.302385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.884 [2024-12-05 20:49:34.302568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.884 [2024-12-05 20:49:34.302577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.884 [2024-12-05 20:49:34.302582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.884 [2024-12-05 20:49:34.302588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.884 [2024-12-05 20:49:34.315067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.884 [2024-12-05 20:49:34.315517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-12-05 20:49:34.315533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:40.884 [2024-12-05 20:49:34.315540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:40.884 [2024-12-05 20:49:34.315723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:40.884 [2024-12-05 20:49:34.315909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.884 [2024-12-05 20:49:34.315917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.884 [2024-12-05 20:49:34.315924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.884 [2024-12-05 20:49:34.315931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.145 [2024-12-05 20:49:34.328203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.145 [2024-12-05 20:49:34.328638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.145 [2024-12-05 20:49:34.328653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.145 [2024-12-05 20:49:34.328660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.145 [2024-12-05 20:49:34.328839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.145 [2024-12-05 20:49:34.329018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.145 [2024-12-05 20:49:34.329025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.145 [2024-12-05 20:49:34.329031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.145 [2024-12-05 20:49:34.329037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.145 [2024-12-05 20:49:34.341263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.145 [2024-12-05 20:49:34.341725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.145 [2024-12-05 20:49:34.341769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.145 [2024-12-05 20:49:34.341792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.145 [2024-12-05 20:49:34.342474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.145 [2024-12-05 20:49:34.343065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.146 [2024-12-05 20:49:34.343073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.146 [2024-12-05 20:49:34.343079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.146 [2024-12-05 20:49:34.343085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.146 [2024-12-05 20:49:34.354292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.146 [2024-12-05 20:49:34.354720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.146 [2024-12-05 20:49:34.354736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.146 [2024-12-05 20:49:34.354746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.146 [2024-12-05 20:49:34.354924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.146 [2024-12-05 20:49:34.355109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.146 [2024-12-05 20:49:34.355117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.146 [2024-12-05 20:49:34.355124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.146 [2024-12-05 20:49:34.355129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.146 [2024-12-05 20:49:34.367234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.146 [2024-12-05 20:49:34.367651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.146 [2024-12-05 20:49:34.367695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.146 [2024-12-05 20:49:34.367719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.146 [2024-12-05 20:49:34.368212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.146 [2024-12-05 20:49:34.368392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.146 [2024-12-05 20:49:34.368400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.146 [2024-12-05 20:49:34.368406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.146 [2024-12-05 20:49:34.368411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.146 [2024-12-05 20:49:34.380189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.146 [2024-12-05 20:49:34.380633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.146 [2024-12-05 20:49:34.380677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.146 [2024-12-05 20:49:34.380700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.146 [2024-12-05 20:49:34.381322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.146 [2024-12-05 20:49:34.381772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.146 [2024-12-05 20:49:34.381789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.146 [2024-12-05 20:49:34.381803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.146 [2024-12-05 20:49:34.381816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.146 [2024-12-05 20:49:34.395664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.146 [2024-12-05 20:49:34.396202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.146 [2024-12-05 20:49:34.396224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.146 [2024-12-05 20:49:34.396234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.146 [2024-12-05 20:49:34.396526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.146 [2024-12-05 20:49:34.396824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.146 [2024-12-05 20:49:34.396836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.146 [2024-12-05 20:49:34.396845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.146 [2024-12-05 20:49:34.396854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.146 [2024-12-05 20:49:34.409139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.146 [2024-12-05 20:49:34.409553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.146 [2024-12-05 20:49:34.409569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.146 [2024-12-05 20:49:34.409576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.146 [2024-12-05 20:49:34.409775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.146 [2024-12-05 20:49:34.409975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.146 [2024-12-05 20:49:34.409983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.146 [2024-12-05 20:49:34.409989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.146 [2024-12-05 20:49:34.409995] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.146 [2024-12-05 20:49:34.422164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.146 [2024-12-05 20:49:34.422587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.146 [2024-12-05 20:49:34.422602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.146 [2024-12-05 20:49:34.422608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.146 [2024-12-05 20:49:34.422786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.146 [2024-12-05 20:49:34.422964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.146 [2024-12-05 20:49:34.422972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.146 [2024-12-05 20:49:34.422977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.146 [2024-12-05 20:49:34.422983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.146 [2024-12-05 20:49:34.435306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.146 [2024-12-05 20:49:34.435724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.146 [2024-12-05 20:49:34.435739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.146 [2024-12-05 20:49:34.435746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.146 [2024-12-05 20:49:34.435924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.146 [2024-12-05 20:49:34.436116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.146 [2024-12-05 20:49:34.436124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.146 [2024-12-05 20:49:34.436133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.146 [2024-12-05 20:49:34.436139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.146 [2024-12-05 20:49:34.448225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.146 [2024-12-05 20:49:34.448626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.146 [2024-12-05 20:49:34.448642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.146 [2024-12-05 20:49:34.448648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.146 [2024-12-05 20:49:34.448826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.146 [2024-12-05 20:49:34.449005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.146 [2024-12-05 20:49:34.449013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.146 [2024-12-05 20:49:34.449019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.146 [2024-12-05 20:49:34.449024] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.146 [2024-12-05 20:49:34.461289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.146 [2024-12-05 20:49:34.461718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.146 [2024-12-05 20:49:34.461733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.146 [2024-12-05 20:49:34.461740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.146 [2024-12-05 20:49:34.461918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.146 [2024-12-05 20:49:34.462103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.146 [2024-12-05 20:49:34.462111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.146 [2024-12-05 20:49:34.462117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.146 [2024-12-05 20:49:34.462123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.147 [2024-12-05 20:49:34.474221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.147 [2024-12-05 20:49:34.474660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.147 [2024-12-05 20:49:34.474675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.147 [2024-12-05 20:49:34.474681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.147 [2024-12-05 20:49:34.474850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.147 [2024-12-05 20:49:34.475020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.147 [2024-12-05 20:49:34.475027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.147 [2024-12-05 20:49:34.475033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.147 [2024-12-05 20:49:34.475038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.147 [2024-12-05 20:49:34.487154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.147 [2024-12-05 20:49:34.487590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.147 [2024-12-05 20:49:34.487633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.147 [2024-12-05 20:49:34.487657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.147 [2024-12-05 20:49:34.488170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.147 [2024-12-05 20:49:34.488350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.147 [2024-12-05 20:49:34.488357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.147 [2024-12-05 20:49:34.488363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.147 [2024-12-05 20:49:34.488368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.147 [2024-12-05 20:49:34.500242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.147 [2024-12-05 20:49:34.500671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.147 [2024-12-05 20:49:34.500686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.147 [2024-12-05 20:49:34.500693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.147 [2024-12-05 20:49:34.500871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.147 [2024-12-05 20:49:34.501050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.147 [2024-12-05 20:49:34.501063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.147 [2024-12-05 20:49:34.501070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.147 [2024-12-05 20:49:34.501076] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.147 [2024-12-05 20:49:34.513169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.147 [2024-12-05 20:49:34.513593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.147 [2024-12-05 20:49:34.513637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.147 [2024-12-05 20:49:34.513660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.147 [2024-12-05 20:49:34.514304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.147 [2024-12-05 20:49:34.514475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.147 [2024-12-05 20:49:34.514482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.147 [2024-12-05 20:49:34.514487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.147 [2024-12-05 20:49:34.514492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.147 [2024-12-05 20:49:34.526187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.147 [2024-12-05 20:49:34.526593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.147 [2024-12-05 20:49:34.526608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.147 [2024-12-05 20:49:34.526616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.147 [2024-12-05 20:49:34.526786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.147 [2024-12-05 20:49:34.526956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.147 [2024-12-05 20:49:34.526963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.147 [2024-12-05 20:49:34.526969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.147 [2024-12-05 20:49:34.526974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.147 [2024-12-05 20:49:34.539164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.147 [2024-12-05 20:49:34.539540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.147 [2024-12-05 20:49:34.539554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.147 [2024-12-05 20:49:34.539560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.147 [2024-12-05 20:49:34.539730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.147 [2024-12-05 20:49:34.539899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.147 [2024-12-05 20:49:34.539906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.147 [2024-12-05 20:49:34.539912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.147 [2024-12-05 20:49:34.539917] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.147 [2024-12-05 20:49:34.552181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.147 [2024-12-05 20:49:34.552608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.147 [2024-12-05 20:49:34.552624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.147 [2024-12-05 20:49:34.552631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.147 [2024-12-05 20:49:34.552814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.147 [2024-12-05 20:49:34.552998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.147 [2024-12-05 20:49:34.553006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.147 [2024-12-05 20:49:34.553012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.147 [2024-12-05 20:49:34.553018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.147 [2024-12-05 20:49:34.565340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.147 [2024-12-05 20:49:34.565795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.147 [2024-12-05 20:49:34.565839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.147 [2024-12-05 20:49:34.565862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.147 [2024-12-05 20:49:34.566472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.147 [2024-12-05 20:49:34.566654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.147 [2024-12-05 20:49:34.566662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.147 [2024-12-05 20:49:34.566668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.147 [2024-12-05 20:49:34.566673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.147 [2024-12-05 20:49:34.578540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.147 [2024-12-05 20:49:34.579006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.147 [2024-12-05 20:49:34.579021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.147 [2024-12-05 20:49:34.579028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.147 [2024-12-05 20:49:34.579229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.147 [2024-12-05 20:49:34.579408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.147 [2024-12-05 20:49:34.579416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.147 [2024-12-05 20:49:34.579422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.147 [2024-12-05 20:49:34.579428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.408 [2024-12-05 20:49:34.591718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.408 [2024-12-05 20:49:34.592136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-12-05 20:49:34.592151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.408 [2024-12-05 20:49:34.592158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.408 [2024-12-05 20:49:34.592337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.408 [2024-12-05 20:49:34.592520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.408 [2024-12-05 20:49:34.592528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.408 [2024-12-05 20:49:34.592534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.408 [2024-12-05 20:49:34.592539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.408 [2024-12-05 20:49:34.604692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.408 [2024-12-05 20:49:34.605107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-12-05 20:49:34.605123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.408 [2024-12-05 20:49:34.605130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.408 [2024-12-05 20:49:34.605309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.408 [2024-12-05 20:49:34.605489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.408 [2024-12-05 20:49:34.605497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.408 [2024-12-05 20:49:34.605506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.408 [2024-12-05 20:49:34.605512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.409 [2024-12-05 20:49:34.617809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.409 [2024-12-05 20:49:34.618236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-12-05 20:49:34.618251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.409 [2024-12-05 20:49:34.618258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.409 [2024-12-05 20:49:34.618436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.409 [2024-12-05 20:49:34.618615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.409 [2024-12-05 20:49:34.618622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.409 [2024-12-05 20:49:34.618628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.409 [2024-12-05 20:49:34.618634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.409 [2024-12-05 20:49:34.630750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.409 [2024-12-05 20:49:34.631206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-12-05 20:49:34.631221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.409 [2024-12-05 20:49:34.631227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.409 [2024-12-05 20:49:34.631405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.409 [2024-12-05 20:49:34.631585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.409 [2024-12-05 20:49:34.631592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.409 [2024-12-05 20:49:34.631598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.409 [2024-12-05 20:49:34.631604] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.409 [2024-12-05 20:49:34.643744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.409 [2024-12-05 20:49:34.644189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-12-05 20:49:34.644233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.409 [2024-12-05 20:49:34.644257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.409 [2024-12-05 20:49:34.644810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.409 [2024-12-05 20:49:34.644981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.409 [2024-12-05 20:49:34.644988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.409 [2024-12-05 20:49:34.644993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.409 [2024-12-05 20:49:34.644999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.409 [2024-12-05 20:49:34.656727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.409 [2024-12-05 20:49:34.657134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-12-05 20:49:34.657150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.409 [2024-12-05 20:49:34.657156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.409 [2024-12-05 20:49:34.657335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.409 [2024-12-05 20:49:34.657515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.409 [2024-12-05 20:49:34.657523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.409 [2024-12-05 20:49:34.657528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.409 [2024-12-05 20:49:34.657534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.409 [2024-12-05 20:49:34.669737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.409 [2024-12-05 20:49:34.670179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-12-05 20:49:34.670195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.409 [2024-12-05 20:49:34.670202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.409 [2024-12-05 20:49:34.670383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.409 [2024-12-05 20:49:34.670553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.409 [2024-12-05 20:49:34.670561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.409 [2024-12-05 20:49:34.670566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.409 [2024-12-05 20:49:34.670572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.409 [2024-12-05 20:49:34.682664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.409 [2024-12-05 20:49:34.683032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-12-05 20:49:34.683047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.409 [2024-12-05 20:49:34.683054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.409 [2024-12-05 20:49:34.683236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.409 [2024-12-05 20:49:34.683414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.409 [2024-12-05 20:49:34.683422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.409 [2024-12-05 20:49:34.683428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.409 [2024-12-05 20:49:34.683433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.409 [2024-12-05 20:49:34.695629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.409 [2024-12-05 20:49:34.696031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-12-05 20:49:34.696046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.409 [2024-12-05 20:49:34.696056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.409 [2024-12-05 20:49:34.696242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.409 [2024-12-05 20:49:34.696420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.409 [2024-12-05 20:49:34.696428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.409 [2024-12-05 20:49:34.696434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.409 [2024-12-05 20:49:34.696439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.409 [2024-12-05 20:49:34.708661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.409 [2024-12-05 20:49:34.709010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-12-05 20:49:34.709025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.409 [2024-12-05 20:49:34.709031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.409 [2024-12-05 20:49:34.709229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.409 [2024-12-05 20:49:34.709408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.409 [2024-12-05 20:49:34.709415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.409 [2024-12-05 20:49:34.709421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.409 [2024-12-05 20:49:34.709427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.409 [2024-12-05 20:49:34.721612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.409 [2024-12-05 20:49:34.722019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-12-05 20:49:34.722034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.409 [2024-12-05 20:49:34.722041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.409 [2024-12-05 20:49:34.722238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.409 [2024-12-05 20:49:34.722419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.409 [2024-12-05 20:49:34.722426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.409 [2024-12-05 20:49:34.722432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.409 [2024-12-05 20:49:34.722438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.409 [2024-12-05 20:49:34.734545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.409 [2024-12-05 20:49:34.734958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-12-05 20:49:34.735002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.409 [2024-12-05 20:49:34.735025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.410 [2024-12-05 20:49:34.735648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.410 [2024-12-05 20:49:34.736111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.410 [2024-12-05 20:49:34.736129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.410 [2024-12-05 20:49:34.736144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.410 [2024-12-05 20:49:34.736157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.410 [2024-12-05 20:49:34.749987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.410 [2024-12-05 20:49:34.750560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.410 [2024-12-05 20:49:34.750582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.410 [2024-12-05 20:49:34.750593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.410 [2024-12-05 20:49:34.751234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.410 [2024-12-05 20:49:34.751528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.410 [2024-12-05 20:49:34.751540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.410 [2024-12-05 20:49:34.751549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.410 [2024-12-05 20:49:34.751558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.410 [2024-12-05 20:49:34.763433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.410 [2024-12-05 20:49:34.763813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.410 [2024-12-05 20:49:34.763829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.410 [2024-12-05 20:49:34.763837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.410 [2024-12-05 20:49:34.764030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.410 [2024-12-05 20:49:34.764232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.410 [2024-12-05 20:49:34.764240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.410 [2024-12-05 20:49:34.764247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.410 [2024-12-05 20:49:34.764253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.410 [2024-12-05 20:49:34.776508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.410 [2024-12-05 20:49:34.776915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.410 [2024-12-05 20:49:34.776930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.410 [2024-12-05 20:49:34.776936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.410 [2024-12-05 20:49:34.777129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.410 [2024-12-05 20:49:34.777307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.410 [2024-12-05 20:49:34.777315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.410 [2024-12-05 20:49:34.777324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.410 [2024-12-05 20:49:34.777330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.410 [2024-12-05 20:49:34.789436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.410 [2024-12-05 20:49:34.789875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.410 [2024-12-05 20:49:34.789891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.410 [2024-12-05 20:49:34.789898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.410 [2024-12-05 20:49:34.790081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.410 [2024-12-05 20:49:34.790261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.410 [2024-12-05 20:49:34.790268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.410 [2024-12-05 20:49:34.790274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.410 [2024-12-05 20:49:34.790279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.410 [2024-12-05 20:49:34.802396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.410 [2024-12-05 20:49:34.802849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.410 [2024-12-05 20:49:34.802864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.410 [2024-12-05 20:49:34.802871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.410 [2024-12-05 20:49:34.803049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.410 [2024-12-05 20:49:34.803254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.410 [2024-12-05 20:49:34.803262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.410 [2024-12-05 20:49:34.803268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.410 [2024-12-05 20:49:34.803273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.410 [2024-12-05 20:49:34.815509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.410 [2024-12-05 20:49:34.815876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.410 [2024-12-05 20:49:34.815891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.410 [2024-12-05 20:49:34.815898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.410 [2024-12-05 20:49:34.816081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.410 [2024-12-05 20:49:34.816265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.410 [2024-12-05 20:49:34.816273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.410 [2024-12-05 20:49:34.816280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.410 [2024-12-05 20:49:34.816285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.410 6573.40 IOPS, 25.68 MiB/s [2024-12-05T19:49:34.851Z] [2024-12-05 20:49:34.828942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.410 [2024-12-05 20:49:34.829289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.410 [2024-12-05 20:49:34.829305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.410 [2024-12-05 20:49:34.829313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.410 [2024-12-05 20:49:34.829491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.410 [2024-12-05 20:49:34.829671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.410 [2024-12-05 20:49:34.829678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.410 [2024-12-05 20:49:34.829684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.410 [2024-12-05 20:49:34.829690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.410 [2024-12-05 20:49:34.841982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.410 [2024-12-05 20:49:34.842400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.410 [2024-12-05 20:49:34.842416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.410 [2024-12-05 20:49:34.842422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.410 [2024-12-05 20:49:34.842601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.410 [2024-12-05 20:49:34.842780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.410 [2024-12-05 20:49:34.842788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.410 [2024-12-05 20:49:34.842794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.410 [2024-12-05 20:49:34.842800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.690 [2024-12-05 20:49:34.855212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.691 [2024-12-05 20:49:34.855656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.691 [2024-12-05 20:49:34.855671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.691 [2024-12-05 20:49:34.855678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.691 [2024-12-05 20:49:34.856279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.691 [2024-12-05 20:49:34.856464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.691 [2024-12-05 20:49:34.856472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.691 [2024-12-05 20:49:34.856479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.691 [2024-12-05 20:49:34.856484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.691 [2024-12-05 20:49:34.868340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.691 [2024-12-05 20:49:34.868820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.691 [2024-12-05 20:49:34.868862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.691 [2024-12-05 20:49:34.868894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.691 [2024-12-05 20:49:34.869416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.691 [2024-12-05 20:49:34.869596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.691 [2024-12-05 20:49:34.869603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.691 [2024-12-05 20:49:34.869609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.691 [2024-12-05 20:49:34.869615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.691 [2024-12-05 20:49:34.881277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.691 [2024-12-05 20:49:34.881750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.691 [2024-12-05 20:49:34.881794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.691 [2024-12-05 20:49:34.881818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.691 [2024-12-05 20:49:34.882368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.691 [2024-12-05 20:49:34.882539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.691 [2024-12-05 20:49:34.882546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.691 [2024-12-05 20:49:34.882552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.691 [2024-12-05 20:49:34.882557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.691 [2024-12-05 20:49:34.894334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.691 [2024-12-05 20:49:34.894788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.691 [2024-12-05 20:49:34.894804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.691 [2024-12-05 20:49:34.894811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.691 [2024-12-05 20:49:34.894989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.691 [2024-12-05 20:49:34.895172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.691 [2024-12-05 20:49:34.895180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.691 [2024-12-05 20:49:34.895186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.691 [2024-12-05 20:49:34.895192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.691 [2024-12-05 20:49:34.907312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.691 [2024-12-05 20:49:34.907738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.691 [2024-12-05 20:49:34.907754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.691 [2024-12-05 20:49:34.907760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.691 [2024-12-05 20:49:34.907938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.691 [2024-12-05 20:49:34.908126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.691 [2024-12-05 20:49:34.908135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.691 [2024-12-05 20:49:34.908140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.691 [2024-12-05 20:49:34.908146] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.691 [2024-12-05 20:49:34.920261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.691 [2024-12-05 20:49:34.920699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.691 [2024-12-05 20:49:34.920714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.691 [2024-12-05 20:49:34.920721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.691 [2024-12-05 20:49:34.920899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.691 [2024-12-05 20:49:34.921084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.691 [2024-12-05 20:49:34.921092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.691 [2024-12-05 20:49:34.921098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.691 [2024-12-05 20:49:34.921104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.691 [2024-12-05 20:49:34.933195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.691 [2024-12-05 20:49:34.933630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.691 [2024-12-05 20:49:34.933645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.691 [2024-12-05 20:49:34.933679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.691 [2024-12-05 20:49:34.934308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.691 [2024-12-05 20:49:34.934479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.691 [2024-12-05 20:49:34.934486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.691 [2024-12-05 20:49:34.934492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.691 [2024-12-05 20:49:34.934497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.691 [2024-12-05 20:49:34.946187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.691 [2024-12-05 20:49:34.946524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.691 [2024-12-05 20:49:34.946539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.691 [2024-12-05 20:49:34.946545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.691 [2024-12-05 20:49:34.946715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.691 [2024-12-05 20:49:34.946884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.691 [2024-12-05 20:49:34.946892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.691 [2024-12-05 20:49:34.946904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.691 [2024-12-05 20:49:34.946910] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.691 [2024-12-05 20:49:34.959111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.691 [2024-12-05 20:49:34.959477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.691 [2024-12-05 20:49:34.959522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.691 [2024-12-05 20:49:34.959544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.691 [2024-12-05 20:49:34.960228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.691 [2024-12-05 20:49:34.960656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.691 [2024-12-05 20:49:34.960663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.691 [2024-12-05 20:49:34.960669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.691 [2024-12-05 20:49:34.960675] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.691 [2024-12-05 20:49:34.972134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.692 [2024-12-05 20:49:34.972583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.692 [2024-12-05 20:49:34.972627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.692 [2024-12-05 20:49:34.972650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.692 [2024-12-05 20:49:34.973301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.692 [2024-12-05 20:49:34.973480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.692 [2024-12-05 20:49:34.973488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.692 [2024-12-05 20:49:34.973494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.692 [2024-12-05 20:49:34.973499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.692 [2024-12-05 20:49:34.985135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.692 [2024-12-05 20:49:34.985570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.692 [2024-12-05 20:49:34.985584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.692 [2024-12-05 20:49:34.985618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.692 [2024-12-05 20:49:34.986302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.692 [2024-12-05 20:49:34.986530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.692 [2024-12-05 20:49:34.986537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.692 [2024-12-05 20:49:34.986543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.692 [2024-12-05 20:49:34.986548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.692 [2024-12-05 20:49:34.998087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.692 [2024-12-05 20:49:34.998552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.692 [2024-12-05 20:49:34.998567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.692 [2024-12-05 20:49:34.998574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.692 [2024-12-05 20:49:34.998758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.692 [2024-12-05 20:49:34.998929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.692 [2024-12-05 20:49:34.998936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.692 [2024-12-05 20:49:34.998942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.692 [2024-12-05 20:49:34.998947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.692 [2024-12-05 20:49:35.011110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.692 [2024-12-05 20:49:35.011546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.692 [2024-12-05 20:49:35.011561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.692 [2024-12-05 20:49:35.011597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.692 [2024-12-05 20:49:35.012281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.692 [2024-12-05 20:49:35.012501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.692 [2024-12-05 20:49:35.012509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.692 [2024-12-05 20:49:35.012514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.692 [2024-12-05 20:49:35.012520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.692 [2024-12-05 20:49:35.024161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.692 [2024-12-05 20:49:35.024597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.692 [2024-12-05 20:49:35.024612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.692 [2024-12-05 20:49:35.024646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.692 [2024-12-05 20:49:35.025330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.692 [2024-12-05 20:49:35.025598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.692 [2024-12-05 20:49:35.025606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.692 [2024-12-05 20:49:35.025611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.692 [2024-12-05 20:49:35.025617] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.692 [2024-12-05 20:49:35.037229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.692 [2024-12-05 20:49:35.037667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.692 [2024-12-05 20:49:35.037701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.692 [2024-12-05 20:49:35.037734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.692 [2024-12-05 20:49:35.038418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.692 [2024-12-05 20:49:35.038630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.692 [2024-12-05 20:49:35.038637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.692 [2024-12-05 20:49:35.038643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.692 [2024-12-05 20:49:35.038648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.692 [2024-12-05 20:49:35.050273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.692 [2024-12-05 20:49:35.050729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.692 [2024-12-05 20:49:35.050744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.692 [2024-12-05 20:49:35.050751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.692 [2024-12-05 20:49:35.050929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.692 [2024-12-05 20:49:35.051113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.692 [2024-12-05 20:49:35.051121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.692 [2024-12-05 20:49:35.051127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.692 [2024-12-05 20:49:35.051132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.692 [2024-12-05 20:49:35.063239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.692 [2024-12-05 20:49:35.063692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.692 [2024-12-05 20:49:35.063707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:41.692 [2024-12-05 20:49:35.063714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:41.692 [2024-12-05 20:49:35.063892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:41.692 [2024-12-05 20:49:35.064074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.692 [2024-12-05 20:49:35.064081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.692 [2024-12-05 20:49:35.064087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.692 [2024-12-05 20:49:35.064108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.692 [2024-12-05 20:49:35.076412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.692 [2024-12-05 20:49:35.076857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.692 [2024-12-05 20:49:35.076901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.693 [2024-12-05 20:49:35.076924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.693 [2024-12-05 20:49:35.077607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.693 [2024-12-05 20:49:35.078010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.693 [2024-12-05 20:49:35.078017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.693 [2024-12-05 20:49:35.078023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.693 [2024-12-05 20:49:35.078028] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.693 [2024-12-05 20:49:35.089401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.693 [2024-12-05 20:49:35.089803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.693 [2024-12-05 20:49:35.089818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.693 [2024-12-05 20:49:35.089824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.693 [2024-12-05 20:49:35.089993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.693 [2024-12-05 20:49:35.090187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.693 [2024-12-05 20:49:35.090195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.693 [2024-12-05 20:49:35.090200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.693 [2024-12-05 20:49:35.090206] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.693 [2024-12-05 20:49:35.102333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.693 [2024-12-05 20:49:35.102761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.693 [2024-12-05 20:49:35.102776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.693 [2024-12-05 20:49:35.102782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.693 [2024-12-05 20:49:35.102951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.693 [2024-12-05 20:49:35.103145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.693 [2024-12-05 20:49:35.103153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.693 [2024-12-05 20:49:35.103159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.693 [2024-12-05 20:49:35.103164] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.693 [2024-12-05 20:49:35.115275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.693 [2024-12-05 20:49:35.115651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.693 [2024-12-05 20:49:35.115666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.693 [2024-12-05 20:49:35.115672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.693 [2024-12-05 20:49:35.115841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.693 [2024-12-05 20:49:35.116011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.693 [2024-12-05 20:49:35.116018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.693 [2024-12-05 20:49:35.116027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.693 [2024-12-05 20:49:35.116033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.693 [2024-12-05 20:49:35.128353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.953 [2024-12-05 20:49:35.128800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.953 [2024-12-05 20:49:35.128817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.953 [2024-12-05 20:49:35.128824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.953 [2024-12-05 20:49:35.129007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.953 [2024-12-05 20:49:35.129202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.953 [2024-12-05 20:49:35.129210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.953 [2024-12-05 20:49:35.129216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.953 [2024-12-05 20:49:35.129221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.953 [2024-12-05 20:49:35.141460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.953 [2024-12-05 20:49:35.141879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.953 [2024-12-05 20:49:35.141895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.953 [2024-12-05 20:49:35.141901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.953 [2024-12-05 20:49:35.142085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.953 [2024-12-05 20:49:35.142264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.953 [2024-12-05 20:49:35.142272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.953 [2024-12-05 20:49:35.142278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.953 [2024-12-05 20:49:35.142283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.953 [2024-12-05 20:49:35.154457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.953 [2024-12-05 20:49:35.154924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.953 [2024-12-05 20:49:35.154968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.953 [2024-12-05 20:49:35.154991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.953 [2024-12-05 20:49:35.155675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.953 [2024-12-05 20:49:35.156144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.953 [2024-12-05 20:49:35.156151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.953 [2024-12-05 20:49:35.156157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.953 [2024-12-05 20:49:35.156163] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.953 [2024-12-05 20:49:35.167466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.953 [2024-12-05 20:49:35.167894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.953 [2024-12-05 20:49:35.167910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.953 [2024-12-05 20:49:35.167916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.953 [2024-12-05 20:49:35.168113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.953 [2024-12-05 20:49:35.168293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.953 [2024-12-05 20:49:35.168301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.953 [2024-12-05 20:49:35.168306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.953 [2024-12-05 20:49:35.168312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.953 [2024-12-05 20:49:35.180409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.953 [2024-12-05 20:49:35.180870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.953 [2024-12-05 20:49:35.180885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.953 [2024-12-05 20:49:35.180892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.953 [2024-12-05 20:49:35.181075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.953 [2024-12-05 20:49:35.181254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.953 [2024-12-05 20:49:35.181262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.953 [2024-12-05 20:49:35.181268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.953 [2024-12-05 20:49:35.181273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.953 [2024-12-05 20:49:35.193455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.953 [2024-12-05 20:49:35.193799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.953 [2024-12-05 20:49:35.193815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.953 [2024-12-05 20:49:35.193823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.953 [2024-12-05 20:49:35.194002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.953 [2024-12-05 20:49:35.194186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.953 [2024-12-05 20:49:35.194194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.953 [2024-12-05 20:49:35.194200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.953 [2024-12-05 20:49:35.194205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.953 [2024-12-05 20:49:35.206424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.953 [2024-12-05 20:49:35.206878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.953 [2024-12-05 20:49:35.206894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.953 [2024-12-05 20:49:35.206903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.953 [2024-12-05 20:49:35.207086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.953 [2024-12-05 20:49:35.207265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.953 [2024-12-05 20:49:35.207272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.953 [2024-12-05 20:49:35.207278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.953 [2024-12-05 20:49:35.207283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.953 [2024-12-05 20:49:35.219400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.953 [2024-12-05 20:49:35.219857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.953 [2024-12-05 20:49:35.219873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.953 [2024-12-05 20:49:35.219880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.953 [2024-12-05 20:49:35.220064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.954 [2024-12-05 20:49:35.220243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.954 [2024-12-05 20:49:35.220251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.954 [2024-12-05 20:49:35.220256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.954 [2024-12-05 20:49:35.220264] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.954 [2024-12-05 20:49:35.232476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.954 [2024-12-05 20:49:35.232828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.954 [2024-12-05 20:49:35.232844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.954 [2024-12-05 20:49:35.232850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.954 [2024-12-05 20:49:35.233028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.954 [2024-12-05 20:49:35.233212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.954 [2024-12-05 20:49:35.233220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.954 [2024-12-05 20:49:35.233226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.954 [2024-12-05 20:49:35.233231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.954 [2024-12-05 20:49:35.245497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.954 [2024-12-05 20:49:35.245912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.954 [2024-12-05 20:49:35.245955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.954 [2024-12-05 20:49:35.245978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.954 [2024-12-05 20:49:35.246660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.954 [2024-12-05 20:49:35.247216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.954 [2024-12-05 20:49:35.247224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.954 [2024-12-05 20:49:35.247230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.954 [2024-12-05 20:49:35.247235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.954 [2024-12-05 20:49:35.258832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.954 [2024-12-05 20:49:35.259231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.954 [2024-12-05 20:49:35.259248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.954 [2024-12-05 20:49:35.259255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.954 [2024-12-05 20:49:35.259438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.954 [2024-12-05 20:49:35.259623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.954 [2024-12-05 20:49:35.259631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.954 [2024-12-05 20:49:35.259637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.954 [2024-12-05 20:49:35.259642] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.954 [2024-12-05 20:49:35.271879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.954 [2024-12-05 20:49:35.272297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.954 [2024-12-05 20:49:35.272342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.954 [2024-12-05 20:49:35.272366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.954 [2024-12-05 20:49:35.272984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.954 [2024-12-05 20:49:35.273168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.954 [2024-12-05 20:49:35.273176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.954 [2024-12-05 20:49:35.273182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.954 [2024-12-05 20:49:35.273188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.954 [2024-12-05 20:49:35.285016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.954 [2024-12-05 20:49:35.285393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.954 [2024-12-05 20:49:35.285410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.954 [2024-12-05 20:49:35.285417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.954 [2024-12-05 20:49:35.285596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.954 [2024-12-05 20:49:35.285774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.954 [2024-12-05 20:49:35.285782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.954 [2024-12-05 20:49:35.285791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.954 [2024-12-05 20:49:35.285797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.954 [2024-12-05 20:49:35.298031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.954 [2024-12-05 20:49:35.298466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.954 [2024-12-05 20:49:35.298482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.954 [2024-12-05 20:49:35.298489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.954 [2024-12-05 20:49:35.298668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.954 [2024-12-05 20:49:35.298847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.954 [2024-12-05 20:49:35.298854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.954 [2024-12-05 20:49:35.298860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.954 [2024-12-05 20:49:35.298865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.954 [2024-12-05 20:49:35.311082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.954 [2024-12-05 20:49:35.311474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.954 [2024-12-05 20:49:35.311489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.954 [2024-12-05 20:49:35.311496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.954 [2024-12-05 20:49:35.311674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.954 [2024-12-05 20:49:35.311853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.954 [2024-12-05 20:49:35.311861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.954 [2024-12-05 20:49:35.311866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.954 [2024-12-05 20:49:35.311872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.954 [2024-12-05 20:49:35.324229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.954 [2024-12-05 20:49:35.324592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.954 [2024-12-05 20:49:35.324608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.954 [2024-12-05 20:49:35.324614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.954 [2024-12-05 20:49:35.324797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.955 [2024-12-05 20:49:35.324981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.955 [2024-12-05 20:49:35.324989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.955 [2024-12-05 20:49:35.324995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.955 [2024-12-05 20:49:35.325001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.955 [2024-12-05 20:49:35.337507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.955 [2024-12-05 20:49:35.337830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.955 [2024-12-05 20:49:35.337845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.955 [2024-12-05 20:49:35.337852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.955 [2024-12-05 20:49:35.338035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.955 [2024-12-05 20:49:35.338225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.955 [2024-12-05 20:49:35.338233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.955 [2024-12-05 20:49:35.338239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.955 [2024-12-05 20:49:35.338245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.955 [2024-12-05 20:49:35.350740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.955 [2024-12-05 20:49:35.351038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.955 [2024-12-05 20:49:35.351054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.955 [2024-12-05 20:49:35.351066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.955 [2024-12-05 20:49:35.351250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.955 [2024-12-05 20:49:35.351437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.955 [2024-12-05 20:49:35.351445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.955 [2024-12-05 20:49:35.351451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.955 [2024-12-05 20:49:35.351456] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.955 [2024-12-05 20:49:35.363958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.955 [2024-12-05 20:49:35.364244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.955 [2024-12-05 20:49:35.364259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.955 [2024-12-05 20:49:35.364266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.955 [2024-12-05 20:49:35.364449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.955 [2024-12-05 20:49:35.364633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.955 [2024-12-05 20:49:35.364641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.955 [2024-12-05 20:49:35.364647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.955 [2024-12-05 20:49:35.364653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.955 [2024-12-05 20:49:35.377150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.955 [2024-12-05 20:49:35.377514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.955 [2024-12-05 20:49:35.377557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.955 [2024-12-05 20:49:35.377589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.955 [2024-12-05 20:49:35.378273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.955 [2024-12-05 20:49:35.378491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.955 [2024-12-05 20:49:35.378499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.955 [2024-12-05 20:49:35.378504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.955 [2024-12-05 20:49:35.378510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:41.955 [2024-12-05 20:49:35.390368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:41.955 [2024-12-05 20:49:35.390730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.955 [2024-12-05 20:49:35.390746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:41.955 [2024-12-05 20:49:35.390752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:41.955 [2024-12-05 20:49:35.390935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:41.955 [2024-12-05 20:49:35.391124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:41.955 [2024-12-05 20:49:35.391132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:41.955 [2024-12-05 20:49:35.391138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:41.955 [2024-12-05 20:49:35.391144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.215 [2024-12-05 20:49:35.403513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.215 [2024-12-05 20:49:35.403820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.215 [2024-12-05 20:49:35.403835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.215 [2024-12-05 20:49:35.403842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.215 [2024-12-05 20:49:35.404021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.215 [2024-12-05 20:49:35.404205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.215 [2024-12-05 20:49:35.404214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.215 [2024-12-05 20:49:35.404219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.215 [2024-12-05 20:49:35.404226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.215 [2024-12-05 20:49:35.416508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.215 [2024-12-05 20:49:35.416936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.215 [2024-12-05 20:49:35.416951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.215 [2024-12-05 20:49:35.416958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.215 [2024-12-05 20:49:35.417140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.215 [2024-12-05 20:49:35.417322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.215 [2024-12-05 20:49:35.417330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.215 [2024-12-05 20:49:35.417336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.215 [2024-12-05 20:49:35.417341] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.215 [2024-12-05 20:49:35.429460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.215 [2024-12-05 20:49:35.429807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.215 [2024-12-05 20:49:35.429822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.215 [2024-12-05 20:49:35.429829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.215 [2024-12-05 20:49:35.430007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.215 [2024-12-05 20:49:35.430191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.215 [2024-12-05 20:49:35.430199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.215 [2024-12-05 20:49:35.430204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.215 [2024-12-05 20:49:35.430210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.215 [2024-12-05 20:49:35.442490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.215 [2024-12-05 20:49:35.442899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.215 [2024-12-05 20:49:35.442943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.215 [2024-12-05 20:49:35.442966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.215 [2024-12-05 20:49:35.443516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.215 [2024-12-05 20:49:35.443695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.215 [2024-12-05 20:49:35.443703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.215 [2024-12-05 20:49:35.443709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.215 [2024-12-05 20:49:35.443714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.215 [2024-12-05 20:49:35.455441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.215 [2024-12-05 20:49:35.455791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.215 [2024-12-05 20:49:35.455806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.215 [2024-12-05 20:49:35.455813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.215 [2024-12-05 20:49:35.455991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.215 [2024-12-05 20:49:35.456176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.215 [2024-12-05 20:49:35.456185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.215 [2024-12-05 20:49:35.456194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.215 [2024-12-05 20:49:35.456199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.215 [2024-12-05 20:49:35.468502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.215 [2024-12-05 20:49:35.468792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.215 [2024-12-05 20:49:35.468808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.215 [2024-12-05 20:49:35.468815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.215 [2024-12-05 20:49:35.468995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.215 [2024-12-05 20:49:35.469179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.215 [2024-12-05 20:49:35.469189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.215 [2024-12-05 20:49:35.469194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.215 [2024-12-05 20:49:35.469200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.215 [2024-12-05 20:49:35.481555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.215 [2024-12-05 20:49:35.481953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.215 [2024-12-05 20:49:35.481968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.215 [2024-12-05 20:49:35.481974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.215 [2024-12-05 20:49:35.482157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.215 [2024-12-05 20:49:35.482336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.215 [2024-12-05 20:49:35.482343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.215 [2024-12-05 20:49:35.482349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.216 [2024-12-05 20:49:35.482355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.216 [2024-12-05 20:49:35.494576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.216 [2024-12-05 20:49:35.494974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.216 [2024-12-05 20:49:35.494989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.216 [2024-12-05 20:49:35.494995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.216 [2024-12-05 20:49:35.495179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.216 [2024-12-05 20:49:35.495357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.216 [2024-12-05 20:49:35.495365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.216 [2024-12-05 20:49:35.495371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.216 [2024-12-05 20:49:35.495376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.216 [2024-12-05 20:49:35.507503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.216 [2024-12-05 20:49:35.507839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.216 [2024-12-05 20:49:35.507853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.216 [2024-12-05 20:49:35.507860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.216 [2024-12-05 20:49:35.508038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.216 [2024-12-05 20:49:35.508225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.216 [2024-12-05 20:49:35.508233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.216 [2024-12-05 20:49:35.508239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.216 [2024-12-05 20:49:35.508245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 528911 Killed "${NVMF_APP[@]}" "$@" 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=530422 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 530422 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:42.216 [2024-12-05 20:49:35.520734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 530422 ']' 00:29:42.216 [2024-12-05 20:49:35.521180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.216 [2024-12-05 20:49:35.521196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.216 [2024-12-05 20:49:35.521204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.216 [2024-12-05 20:49:35.521388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.216 [2024-12-05 20:49:35.521572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.216 [2024-12-05 20:49:35.521580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.216 [2024-12-05 20:49:35.521586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.216 [2024-12-05 20:49:35.521592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.216 20:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.216 [2024-12-05 20:49:35.533911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.216 [2024-12-05 20:49:35.534263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.216 [2024-12-05 20:49:35.534279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.216 [2024-12-05 20:49:35.534286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.216 [2024-12-05 20:49:35.534469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.216 [2024-12-05 20:49:35.534652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.216 [2024-12-05 20:49:35.534659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.216 [2024-12-05 20:49:35.534665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.216 [2024-12-05 20:49:35.534671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.216 [2024-12-05 20:49:35.547170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.216 [2024-12-05 20:49:35.547514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.216 [2024-12-05 20:49:35.547530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.216 [2024-12-05 20:49:35.547536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.216 [2024-12-05 20:49:35.547719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.216 [2024-12-05 20:49:35.547902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.216 [2024-12-05 20:49:35.547910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.216 [2024-12-05 20:49:35.547916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.216 [2024-12-05 20:49:35.547921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.216 [2024-12-05 20:49:35.560413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.216 [2024-12-05 20:49:35.560799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.216 [2024-12-05 20:49:35.560814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.216 [2024-12-05 20:49:35.560821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.216 [2024-12-05 20:49:35.561004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.216 [2024-12-05 20:49:35.561193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.216 [2024-12-05 20:49:35.561202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.216 [2024-12-05 20:49:35.561208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.216 [2024-12-05 20:49:35.561214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:42.216 [2024-12-05 20:49:35.568823] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:29:42.216 [2024-12-05 20:49:35.568860] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.216 [2024-12-05 20:49:35.573575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.216 [2024-12-05 20:49:35.573873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.216 [2024-12-05 20:49:35.573890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.216 [2024-12-05 20:49:35.573897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.216 [2024-12-05 20:49:35.574085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.216 [2024-12-05 20:49:35.574269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.216 [2024-12-05 20:49:35.574277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.216 [2024-12-05 20:49:35.574284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.216 [2024-12-05 20:49:35.574290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.216 [2024-12-05 20:49:35.586875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.216 [2024-12-05 20:49:35.587226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.216 [2024-12-05 20:49:35.587244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.216 [2024-12-05 20:49:35.587251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.216 [2024-12-05 20:49:35.587435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.217 [2024-12-05 20:49:35.587618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.217 [2024-12-05 20:49:35.587626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.217 [2024-12-05 20:49:35.587632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.217 [2024-12-05 20:49:35.587639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.217 [2024-12-05 20:49:35.600142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.217 [2024-12-05 20:49:35.600562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.217 [2024-12-05 20:49:35.600579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.217 [2024-12-05 20:49:35.600586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.217 [2024-12-05 20:49:35.600770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.217 [2024-12-05 20:49:35.600954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.217 [2024-12-05 20:49:35.600962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.217 [2024-12-05 20:49:35.600968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.217 [2024-12-05 20:49:35.600974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.217 [2024-12-05 20:49:35.613462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.217 [2024-12-05 20:49:35.613840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.217 [2024-12-05 20:49:35.613855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.217 [2024-12-05 20:49:35.613862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.217 [2024-12-05 20:49:35.614045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.217 [2024-12-05 20:49:35.614234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.217 [2024-12-05 20:49:35.614242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.217 [2024-12-05 20:49:35.614248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.217 [2024-12-05 20:49:35.614254] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.217 [2024-12-05 20:49:35.626700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.217 [2024-12-05 20:49:35.627012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.217 [2024-12-05 20:49:35.627028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.217 [2024-12-05 20:49:35.627035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.217 [2024-12-05 20:49:35.627223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.217 [2024-12-05 20:49:35.627407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.217 [2024-12-05 20:49:35.627415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.217 [2024-12-05 20:49:35.627421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.217 [2024-12-05 20:49:35.627427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.217 [2024-12-05 20:49:35.639798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.217 [2024-12-05 20:49:35.640100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.217 [2024-12-05 20:49:35.640117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.217 [2024-12-05 20:49:35.640123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.217 [2024-12-05 20:49:35.640307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.217 [2024-12-05 20:49:35.640491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.217 [2024-12-05 20:49:35.640499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.217 [2024-12-05 20:49:35.640505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.217 [2024-12-05 20:49:35.640511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.217 [2024-12-05 20:49:35.642725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:42.217 [2024-12-05 20:49:35.653086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.217 [2024-12-05 20:49:35.653400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.217 [2024-12-05 20:49:35.653418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.217 [2024-12-05 20:49:35.653434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.217 [2024-12-05 20:49:35.653618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.477 [2024-12-05 20:49:35.653804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.477 [2024-12-05 20:49:35.653815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.477 [2024-12-05 20:49:35.653823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.477 [2024-12-05 20:49:35.653829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.477 [2024-12-05 20:49:35.666301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.477 [2024-12-05 20:49:35.666650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.477 [2024-12-05 20:49:35.666666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.477 [2024-12-05 20:49:35.666673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.477 [2024-12-05 20:49:35.666856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.477 [2024-12-05 20:49:35.667041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.477 [2024-12-05 20:49:35.667050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.477 [2024-12-05 20:49:35.667056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.477 [2024-12-05 20:49:35.667068] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.477 [2024-12-05 20:49:35.679499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.477 [2024-12-05 20:49:35.679862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.477 [2024-12-05 20:49:35.679878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.477 [2024-12-05 20:49:35.679885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.477 [2024-12-05 20:49:35.680074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.477 [2024-12-05 20:49:35.680259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.477 [2024-12-05 20:49:35.680267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.477 [2024-12-05 20:49:35.680273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.477 [2024-12-05 20:49:35.680279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:42.477 [2024-12-05 20:49:35.682761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.477 [2024-12-05 20:49:35.682783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.477 [2024-12-05 20:49:35.682790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.478 [2024-12-05 20:49:35.682796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:42.478 [2024-12-05 20:49:35.682800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.478 [2024-12-05 20:49:35.684102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.478 [2024-12-05 20:49:35.684157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.478 [2024-12-05 20:49:35.684159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.478 [2024-12-05 20:49:35.692776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.478 [2024-12-05 20:49:35.693224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.478 [2024-12-05 20:49:35.693244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.478 [2024-12-05 20:49:35.693254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.478 [2024-12-05 20:49:35.693441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.478 [2024-12-05 20:49:35.693627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.478 [2024-12-05 20:49:35.693636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.478 [2024-12-05 20:49:35.693644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.478 [2024-12-05 20:49:35.693650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.478 [2024-12-05 20:49:35.705979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.478 [2024-12-05 20:49:35.706445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.478 [2024-12-05 20:49:35.706465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.478 [2024-12-05 20:49:35.706474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.478 [2024-12-05 20:49:35.706660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.478 [2024-12-05 20:49:35.706846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.478 [2024-12-05 20:49:35.706855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.478 [2024-12-05 20:49:35.706862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.478 [2024-12-05 20:49:35.706868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.478 [2024-12-05 20:49:35.719210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.478 [2024-12-05 20:49:35.719691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.478 [2024-12-05 20:49:35.719710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.478 [2024-12-05 20:49:35.719720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.478 [2024-12-05 20:49:35.719906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.478 [2024-12-05 20:49:35.720096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.478 [2024-12-05 20:49:35.720104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.478 [2024-12-05 20:49:35.720112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.478 [2024-12-05 20:49:35.720119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.478 [2024-12-05 20:49:35.732429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.478 [2024-12-05 20:49:35.732890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.478 [2024-12-05 20:49:35.732909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.478 [2024-12-05 20:49:35.732919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.478 [2024-12-05 20:49:35.733110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.478 [2024-12-05 20:49:35.733298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.478 [2024-12-05 20:49:35.733306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.478 [2024-12-05 20:49:35.733314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.478 [2024-12-05 20:49:35.733320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.478 [2024-12-05 20:49:35.745629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.478 [2024-12-05 20:49:35.746107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.478 [2024-12-05 20:49:35.746126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.478 [2024-12-05 20:49:35.746135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.478 [2024-12-05 20:49:35.746321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.478 [2024-12-05 20:49:35.746507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.478 [2024-12-05 20:49:35.746515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.478 [2024-12-05 20:49:35.746523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.478 [2024-12-05 20:49:35.746529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.478 [2024-12-05 20:49:35.758848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.478 [2024-12-05 20:49:35.759231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.478 [2024-12-05 20:49:35.759247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.478 [2024-12-05 20:49:35.759255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.478 [2024-12-05 20:49:35.759438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.478 [2024-12-05 20:49:35.759622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.478 [2024-12-05 20:49:35.759630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.478 [2024-12-05 20:49:35.759637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.478 [2024-12-05 20:49:35.759643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.478 [2024-12-05 20:49:35.772133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.478 [2024-12-05 20:49:35.772580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.478 [2024-12-05 20:49:35.772596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.478 [2024-12-05 20:49:35.772607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.478 [2024-12-05 20:49:35.772791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.478 [2024-12-05 20:49:35.772976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.478 [2024-12-05 20:49:35.772984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.478 [2024-12-05 20:49:35.772990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.478 [2024-12-05 20:49:35.772996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.478 [2024-12-05 20:49:35.785302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.478 [2024-12-05 20:49:35.785744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.478 [2024-12-05 20:49:35.785760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.478 [2024-12-05 20:49:35.785768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.478 [2024-12-05 20:49:35.785951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.478 [2024-12-05 20:49:35.786142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.478 [2024-12-05 20:49:35.786150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.478 [2024-12-05 20:49:35.786157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.478 [2024-12-05 20:49:35.786163] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.478 [2024-12-05 20:49:35.798459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.478 [2024-12-05 20:49:35.798882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.478 [2024-12-05 20:49:35.798898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.478 [2024-12-05 20:49:35.798905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.478 [2024-12-05 20:49:35.799092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.478 [2024-12-05 20:49:35.799275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.478 [2024-12-05 20:49:35.799283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.478 [2024-12-05 20:49:35.799289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.479 [2024-12-05 20:49:35.799295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.479 [2024-12-05 20:49:35.811759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.479 [2024-12-05 20:49:35.812180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.479 [2024-12-05 20:49:35.812196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.479 [2024-12-05 20:49:35.812203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.479 [2024-12-05 20:49:35.812386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.479 [2024-12-05 20:49:35.812570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.479 [2024-12-05 20:49:35.812580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.479 [2024-12-05 20:49:35.812586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.479 [2024-12-05 20:49:35.812592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.479 [2024-12-05 20:49:35.825079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.479 [2024-12-05 20:49:35.825500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.479 [2024-12-05 20:49:35.825516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.479 [2024-12-05 20:49:35.825523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.479 [2024-12-05 20:49:35.825706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.479 [2024-12-05 20:49:35.825889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.479 [2024-12-05 20:49:35.825897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.479 [2024-12-05 20:49:35.825903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.479 [2024-12-05 20:49:35.825909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.479 5477.83 IOPS, 21.40 MiB/s [2024-12-05T19:49:35.920Z] [2024-12-05 20:49:35.838318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.479 [2024-12-05 20:49:35.838789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.479 [2024-12-05 20:49:35.838805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.479 [2024-12-05 20:49:35.838812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.479 [2024-12-05 20:49:35.838995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.479 [2024-12-05 20:49:35.839183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.479 [2024-12-05 20:49:35.839191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.479 [2024-12-05 20:49:35.839197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.479 [2024-12-05 20:49:35.839202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.479 [2024-12-05 20:49:35.851522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.479 [2024-12-05 20:49:35.851943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.479 [2024-12-05 20:49:35.851959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.479 [2024-12-05 20:49:35.851966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.479 [2024-12-05 20:49:35.852153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.479 [2024-12-05 20:49:35.852337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.479 [2024-12-05 20:49:35.852345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.479 [2024-12-05 20:49:35.852354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.479 [2024-12-05 20:49:35.852360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.479 [2024-12-05 20:49:35.864832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.479 [2024-12-05 20:49:35.865255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.479 [2024-12-05 20:49:35.865271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.479 [2024-12-05 20:49:35.865278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.479 [2024-12-05 20:49:35.865461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.479 [2024-12-05 20:49:35.865644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.479 [2024-12-05 20:49:35.865652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.479 [2024-12-05 20:49:35.865658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.479 [2024-12-05 20:49:35.865663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.479 [2024-12-05 20:49:35.878134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.479 [2024-12-05 20:49:35.878581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.479 [2024-12-05 20:49:35.878596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.479 [2024-12-05 20:49:35.878603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.479 [2024-12-05 20:49:35.878785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.479 [2024-12-05 20:49:35.878970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.479 [2024-12-05 20:49:35.878977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.479 [2024-12-05 20:49:35.878983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.479 [2024-12-05 20:49:35.878989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.479 [2024-12-05 20:49:35.891282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.479 [2024-12-05 20:49:35.891735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.479 [2024-12-05 20:49:35.891750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.479 [2024-12-05 20:49:35.891756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.479 [2024-12-05 20:49:35.891940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.479 [2024-12-05 20:49:35.892127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.479 [2024-12-05 20:49:35.892135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.479 [2024-12-05 20:49:35.892141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.479 [2024-12-05 20:49:35.892147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.479 [2024-12-05 20:49:35.904443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.479 [2024-12-05 20:49:35.904886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.479 [2024-12-05 20:49:35.904901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.479 [2024-12-05 20:49:35.904908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.479 [2024-12-05 20:49:35.905094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.479 [2024-12-05 20:49:35.905279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.479 [2024-12-05 20:49:35.905287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.479 [2024-12-05 20:49:35.905293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.479 [2024-12-05 20:49:35.905299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.741 [2024-12-05 20:49:35.917631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.741 [2024-12-05 20:49:35.918069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.741 [2024-12-05 20:49:35.918085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.741 [2024-12-05 20:49:35.918093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.741 [2024-12-05 20:49:35.918277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.741 [2024-12-05 20:49:35.918460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.741 [2024-12-05 20:49:35.918468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.741 [2024-12-05 20:49:35.918474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.741 [2024-12-05 20:49:35.918480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.741 [2024-12-05 20:49:35.930784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.741 [2024-12-05 20:49:35.931150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.741 [2024-12-05 20:49:35.931166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.741 [2024-12-05 20:49:35.931173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.741 [2024-12-05 20:49:35.931357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.741 [2024-12-05 20:49:35.931541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.741 [2024-12-05 20:49:35.931549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.741 [2024-12-05 20:49:35.931555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.741 [2024-12-05 20:49:35.931561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.741 [2024-12-05 20:49:35.944028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.741 [2024-12-05 20:49:35.944377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.741 [2024-12-05 20:49:35.944392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.741 [2024-12-05 20:49:35.944403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.741 [2024-12-05 20:49:35.944586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.741 [2024-12-05 20:49:35.944770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.741 [2024-12-05 20:49:35.944778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.741 [2024-12-05 20:49:35.944784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.741 [2024-12-05 20:49:35.944789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.741 [2024-12-05 20:49:35.957283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.741 [2024-12-05 20:49:35.957733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.741 [2024-12-05 20:49:35.957748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.741 [2024-12-05 20:49:35.957755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.741 [2024-12-05 20:49:35.957938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.741 [2024-12-05 20:49:35.958125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.741 [2024-12-05 20:49:35.958133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.741 [2024-12-05 20:49:35.958139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.741 [2024-12-05 20:49:35.958146] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.741 [2024-12-05 20:49:35.970446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.741 [2024-12-05 20:49:35.970865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.741 [2024-12-05 20:49:35.970881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.741 [2024-12-05 20:49:35.970888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.741 [2024-12-05 20:49:35.971074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.741 [2024-12-05 20:49:35.971259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.741 [2024-12-05 20:49:35.971266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.741 [2024-12-05 20:49:35.971272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.741 [2024-12-05 20:49:35.971278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.741 [2024-12-05 20:49:35.983739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.741 [2024-12-05 20:49:35.984179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.741 [2024-12-05 20:49:35.984195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.741 [2024-12-05 20:49:35.984202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.741 [2024-12-05 20:49:35.984385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.741 [2024-12-05 20:49:35.984571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.741 [2024-12-05 20:49:35.984579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.741 [2024-12-05 20:49:35.984585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.741 [2024-12-05 20:49:35.984591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.741 [2024-12-05 20:49:35.996886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.741 [2024-12-05 20:49:35.997228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.741 [2024-12-05 20:49:35.997243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.741 [2024-12-05 20:49:35.997250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.741 [2024-12-05 20:49:35.997433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.741 [2024-12-05 20:49:35.997615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.741 [2024-12-05 20:49:35.997623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.741 [2024-12-05 20:49:35.997629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.741 [2024-12-05 20:49:35.997635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.741 [2024-12-05 20:49:36.010100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:42.741 [2024-12-05 20:49:36.010492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.741 [2024-12-05 20:49:36.010508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:42.741 [2024-12-05 20:49:36.010514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:42.741 [2024-12-05 20:49:36.010698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:42.741 [2024-12-05 20:49:36.010881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:42.741 [2024-12-05 20:49:36.010889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:42.741 [2024-12-05 20:49:36.010895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:42.741 [2024-12-05 20:49:36.010900] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:42.741 [2024-12-05 20:49:36.023373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.741 [2024-12-05 20:49:36.023797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.741 [2024-12-05 20:49:36.023813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.741 [2024-12-05 20:49:36.023820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.741 [2024-12-05 20:49:36.024003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.741 [2024-12-05 20:49:36.024189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.741 [2024-12-05 20:49:36.024197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.742 [2024-12-05 20:49:36.024207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.742 [2024-12-05 20:49:36.024213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.742 [2024-12-05 20:49:36.036671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.742 [2024-12-05 20:49:36.037086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.742 [2024-12-05 20:49:36.037102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.742 [2024-12-05 20:49:36.037109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.742 [2024-12-05 20:49:36.037293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.742 [2024-12-05 20:49:36.037477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.742 [2024-12-05 20:49:36.037484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.742 [2024-12-05 20:49:36.037490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.742 [2024-12-05 20:49:36.037496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.742 [2024-12-05 20:49:36.049960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.742 [2024-12-05 20:49:36.050308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.742 [2024-12-05 20:49:36.050325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.742 [2024-12-05 20:49:36.050332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.742 [2024-12-05 20:49:36.050515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.742 [2024-12-05 20:49:36.050697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.742 [2024-12-05 20:49:36.050705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.742 [2024-12-05 20:49:36.050711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.742 [2024-12-05 20:49:36.050717] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.742 [2024-12-05 20:49:36.063189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.742 [2024-12-05 20:49:36.063606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.742 [2024-12-05 20:49:36.063622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.742 [2024-12-05 20:49:36.063628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.742 [2024-12-05 20:49:36.063812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.742 [2024-12-05 20:49:36.063995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.742 [2024-12-05 20:49:36.064003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.742 [2024-12-05 20:49:36.064009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.742 [2024-12-05 20:49:36.064015] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.742 [2024-12-05 20:49:36.076487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.742 [2024-12-05 20:49:36.076919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.742 [2024-12-05 20:49:36.076934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.742 [2024-12-05 20:49:36.076941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.742 [2024-12-05 20:49:36.077128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.742 [2024-12-05 20:49:36.077317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.742 [2024-12-05 20:49:36.077325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.742 [2024-12-05 20:49:36.077331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.742 [2024-12-05 20:49:36.077336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.742 [2024-12-05 20:49:36.089802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.742 [2024-12-05 20:49:36.090221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.742 [2024-12-05 20:49:36.090238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.742 [2024-12-05 20:49:36.090244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.742 [2024-12-05 20:49:36.090428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.742 [2024-12-05 20:49:36.090612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.742 [2024-12-05 20:49:36.090620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.742 [2024-12-05 20:49:36.090625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.742 [2024-12-05 20:49:36.090631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.742 [2024-12-05 20:49:36.103103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.742 [2024-12-05 20:49:36.103522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.742 [2024-12-05 20:49:36.103538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.742 [2024-12-05 20:49:36.103544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.742 [2024-12-05 20:49:36.103727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.742 [2024-12-05 20:49:36.103911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.742 [2024-12-05 20:49:36.103919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.742 [2024-12-05 20:49:36.103925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.742 [2024-12-05 20:49:36.103931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.742 [2024-12-05 20:49:36.116392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.742 [2024-12-05 20:49:36.116814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.742 [2024-12-05 20:49:36.116830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.742 [2024-12-05 20:49:36.116839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.742 [2024-12-05 20:49:36.117022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.742 [2024-12-05 20:49:36.117212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.742 [2024-12-05 20:49:36.117220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.742 [2024-12-05 20:49:36.117226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.742 [2024-12-05 20:49:36.117232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.742 [2024-12-05 20:49:36.129693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.742 [2024-12-05 20:49:36.130113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.742 [2024-12-05 20:49:36.130128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.742 [2024-12-05 20:49:36.130135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.742 [2024-12-05 20:49:36.130317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.742 [2024-12-05 20:49:36.130502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.742 [2024-12-05 20:49:36.130510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.742 [2024-12-05 20:49:36.130516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.742 [2024-12-05 20:49:36.130521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.742 [2024-12-05 20:49:36.142991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.742 [2024-12-05 20:49:36.143390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.742 [2024-12-05 20:49:36.143406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.742 [2024-12-05 20:49:36.143413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.742 [2024-12-05 20:49:36.143596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.742 [2024-12-05 20:49:36.143779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.742 [2024-12-05 20:49:36.143787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.742 [2024-12-05 20:49:36.143793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.742 [2024-12-05 20:49:36.143798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.743 [2024-12-05 20:49:36.156261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.743 [2024-12-05 20:49:36.156697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.743 [2024-12-05 20:49:36.156712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.743 [2024-12-05 20:49:36.156719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.743 [2024-12-05 20:49:36.156902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.743 [2024-12-05 20:49:36.157099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.743 [2024-12-05 20:49:36.157108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.743 [2024-12-05 20:49:36.157114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.743 [2024-12-05 20:49:36.157119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:42.743 [2024-12-05 20:49:36.169578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:42.743 [2024-12-05 20:49:36.169999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.743 [2024-12-05 20:49:36.170014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:42.743 [2024-12-05 20:49:36.170021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:42.743 [2024-12-05 20:49:36.170208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:42.743 [2024-12-05 20:49:36.170397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:42.743 [2024-12-05 20:49:36.170405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:42.743 [2024-12-05 20:49:36.170411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:42.743 [2024-12-05 20:49:36.170416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.004 [2024-12-05 20:49:36.182883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.004 [2024-12-05 20:49:36.183321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.004 [2024-12-05 20:49:36.183338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.004 [2024-12-05 20:49:36.183345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.004 [2024-12-05 20:49:36.183528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.004 [2024-12-05 20:49:36.183713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.004 [2024-12-05 20:49:36.183721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.005 [2024-12-05 20:49:36.183727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.005 [2024-12-05 20:49:36.183733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.005 [2024-12-05 20:49:36.196202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.005 [2024-12-05 20:49:36.196675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.005 [2024-12-05 20:49:36.196690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.005 [2024-12-05 20:49:36.196696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.005 [2024-12-05 20:49:36.196880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.005 [2024-12-05 20:49:36.197067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.005 [2024-12-05 20:49:36.197075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.005 [2024-12-05 20:49:36.197085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.005 [2024-12-05 20:49:36.197091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.005 [2024-12-05 20:49:36.209393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.005 [2024-12-05 20:49:36.209839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.005 [2024-12-05 20:49:36.209854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.005 [2024-12-05 20:49:36.209861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.005 [2024-12-05 20:49:36.210044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.005 [2024-12-05 20:49:36.210231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.005 [2024-12-05 20:49:36.210240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.005 [2024-12-05 20:49:36.210245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.005 [2024-12-05 20:49:36.210251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.005 [2024-12-05 20:49:36.222559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.005 [2024-12-05 20:49:36.223003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.005 [2024-12-05 20:49:36.223018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.005 [2024-12-05 20:49:36.223024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.005 [2024-12-05 20:49:36.223213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.005 [2024-12-05 20:49:36.223396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.005 [2024-12-05 20:49:36.223404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.005 [2024-12-05 20:49:36.223410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.005 [2024-12-05 20:49:36.223416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.005 [2024-12-05 20:49:36.235713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.005 [2024-12-05 20:49:36.236159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.005 [2024-12-05 20:49:36.236176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.005 [2024-12-05 20:49:36.236183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.005 [2024-12-05 20:49:36.236367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.005 [2024-12-05 20:49:36.236550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.005 [2024-12-05 20:49:36.236558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.005 [2024-12-05 20:49:36.236564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.005 [2024-12-05 20:49:36.236570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.005 [2024-12-05 20:49:36.248872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.005 [2024-12-05 20:49:36.249324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.005 [2024-12-05 20:49:36.249340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.005 [2024-12-05 20:49:36.249347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.005 [2024-12-05 20:49:36.249530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.005 [2024-12-05 20:49:36.249713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.005 [2024-12-05 20:49:36.249721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.005 [2024-12-05 20:49:36.249727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.005 [2024-12-05 20:49:36.249732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.005 [2024-12-05 20:49:36.262038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.005 [2024-12-05 20:49:36.262465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.005 [2024-12-05 20:49:36.262481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.005 [2024-12-05 20:49:36.262487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.005 [2024-12-05 20:49:36.262670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.005 [2024-12-05 20:49:36.262853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.005 [2024-12-05 20:49:36.262861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.005 [2024-12-05 20:49:36.262866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.005 [2024-12-05 20:49:36.262872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.005 [2024-12-05 20:49:36.275349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.005 [2024-12-05 20:49:36.275796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.005 [2024-12-05 20:49:36.275811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.005 [2024-12-05 20:49:36.275818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.006 [2024-12-05 20:49:36.276001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.006 [2024-12-05 20:49:36.276190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.006 [2024-12-05 20:49:36.276198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.006 [2024-12-05 20:49:36.276204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.006 [2024-12-05 20:49:36.276209] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.006 [2024-12-05 20:49:36.288517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.006 [2024-12-05 20:49:36.288971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.006 [2024-12-05 20:49:36.288988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.006 [2024-12-05 20:49:36.288998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.006 [2024-12-05 20:49:36.289186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.006 [2024-12-05 20:49:36.289369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.006 [2024-12-05 20:49:36.289377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.006 [2024-12-05 20:49:36.289383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.006 [2024-12-05 20:49:36.289389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.006 [2024-12-05 20:49:36.301865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.006 [2024-12-05 20:49:36.302321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.006 [2024-12-05 20:49:36.302338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.006 [2024-12-05 20:49:36.302345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.006 [2024-12-05 20:49:36.302529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.006 [2024-12-05 20:49:36.302713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.006 [2024-12-05 20:49:36.302722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.006 [2024-12-05 20:49:36.302727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.006 [2024-12-05 20:49:36.302733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.006 [2024-12-05 20:49:36.315033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.006 [2024-12-05 20:49:36.315461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.006 [2024-12-05 20:49:36.315477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.006 [2024-12-05 20:49:36.315484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.006 [2024-12-05 20:49:36.315667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.006 [2024-12-05 20:49:36.315851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.006 [2024-12-05 20:49:36.315859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.006 [2024-12-05 20:49:36.315865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.006 [2024-12-05 20:49:36.315871] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.006 [2024-12-05 20:49:36.328352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:43.006 [2024-12-05 20:49:36.328773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.006 [2024-12-05 20:49:36.328789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420
00:29:43.006 [2024-12-05 20:49:36.328796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set
00:29:43.006 [2024-12-05 20:49:36.328980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor
00:29:43.006 [2024-12-05 20:49:36.329175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:43.006 [2024-12-05 20:49:36.329184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:43.006 [2024-12-05 20:49:36.329190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:43.006 [2024-12-05 20:49:36.329196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:43.006 [2024-12-05 20:49:36.341665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.006 [2024-12-05 20:49:36.342114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.006 [2024-12-05 20:49:36.342131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:43.006 [2024-12-05 20:49:36.342138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:43.006 [2024-12-05 20:49:36.342321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:43.006 [2024-12-05 20:49:36.342505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:43.006 [2024-12-05 20:49:36.342514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:43.006 [2024-12-05 20:49:36.342519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:43.006 [2024-12-05 20:49:36.342525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:43.006 [2024-12-05 20:49:36.354823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.006 [2024-12-05 20:49:36.355288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.006 [2024-12-05 20:49:36.355305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:43.006 [2024-12-05 20:49:36.355311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:43.006 [2024-12-05 20:49:36.355495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:43.006 [2024-12-05 20:49:36.355679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:43.006 [2024-12-05 20:49:36.355686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:43.006 [2024-12-05 20:49:36.355692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:43.006 [2024-12-05 20:49:36.355698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:43.006 [2024-12-05 20:49:36.368018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.006 [2024-12-05 20:49:36.368418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.007 [2024-12-05 20:49:36.368433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:43.007 [2024-12-05 20:49:36.368440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:43.007 [2024-12-05 20:49:36.368623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:43.007 [2024-12-05 20:49:36.368806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:43.007 [2024-12-05 20:49:36.368814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:43.007 [2024-12-05 20:49:36.368820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:43.007 [2024-12-05 20:49:36.368830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:43.007 [2024-12-05 20:49:36.381313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.007 [2024-12-05 20:49:36.381803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.007 [2024-12-05 20:49:36.381818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:43.007 [2024-12-05 20:49:36.381825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:43.007 [2024-12-05 20:49:36.382008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.007 [2024-12-05 20:49:36.382196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:43.007 [2024-12-05 20:49:36.382205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:43.007 [2024-12-05 20:49:36.382211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:43.007 [2024-12-05 20:49:36.382216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.007 [2024-12-05 20:49:36.394523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.007 [2024-12-05 20:49:36.394964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.007 [2024-12-05 20:49:36.394980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:43.007 [2024-12-05 20:49:36.394988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:43.007 [2024-12-05 20:49:36.395175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:43.007 [2024-12-05 20:49:36.395358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:43.007 [2024-12-05 20:49:36.395366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:43.007 [2024-12-05 20:49:36.395372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:43.007 [2024-12-05 20:49:36.395377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:43.007 [2024-12-05 20:49:36.407685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.007 [2024-12-05 20:49:36.408053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.007 [2024-12-05 20:49:36.408073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:43.007 [2024-12-05 20:49:36.408080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:43.007 [2024-12-05 20:49:36.408263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:43.007 [2024-12-05 20:49:36.408446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:43.007 [2024-12-05 20:49:36.408454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:43.007 [2024-12-05 20:49:36.408463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:43.007 [2024-12-05 20:49:36.408469] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.007 [2024-12-05 20:49:36.420954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.007 [2024-12-05 20:49:36.421353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.007 [2024-12-05 20:49:36.421369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:43.007 [2024-12-05 20:49:36.421375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:43.007 [2024-12-05 20:49:36.421563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:43.007 [2024-12-05 20:49:36.421747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:43.007 [2024-12-05 20:49:36.421755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:43.007 [2024-12-05 20:49:36.421761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:43.007 [2024-12-05 20:49:36.421767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:43.007 [2024-12-05 20:49:36.424917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.007 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.007 [2024-12-05 20:49:36.434122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.007 [2024-12-05 20:49:36.434489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.007 [2024-12-05 20:49:36.434505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:43.007 [2024-12-05 20:49:36.434512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:43.007 [2024-12-05 20:49:36.434695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:43.007 [2024-12-05 20:49:36.434879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:43.007 [2024-12-05 20:49:36.434887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:43.007 [2024-12-05 20:49:36.434893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:43.007 [2024-12-05 20:49:36.434898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:43.268 [2024-12-05 20:49:36.447377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.268 [2024-12-05 20:49:36.447805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.268 [2024-12-05 20:49:36.447824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:43.268 [2024-12-05 20:49:36.447831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:43.268 [2024-12-05 20:49:36.448014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:43.268 [2024-12-05 20:49:36.448204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:43.268 [2024-12-05 20:49:36.448212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:43.268 [2024-12-05 20:49:36.448218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:43.268 [2024-12-05 20:49:36.448224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:43.268 [2024-12-05 20:49:36.460538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.268 [2024-12-05 20:49:36.460960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.268 [2024-12-05 20:49:36.460976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:43.268 [2024-12-05 20:49:36.460983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:43.268 [2024-12-05 20:49:36.461170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:43.268 [2024-12-05 20:49:36.461354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:43.268 [2024-12-05 20:49:36.461362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:43.268 [2024-12-05 20:49:36.461368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:43.268 [2024-12-05 20:49:36.461374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:43.268 Malloc0 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.268 [2024-12-05 20:49:36.473846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.268 [2024-12-05 20:49:36.474268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.268 [2024-12-05 20:49:36.474284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be0630 with addr=10.0.0.2, port=4420 00:29:43.268 [2024-12-05 20:49:36.474291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0630 is same with the state(6) to be set 00:29:43.268 [2024-12-05 20:49:36.474475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0630 (9): Bad file descriptor 00:29:43.268 [2024-12-05 20:49:36.474659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:43.268 [2024-12-05 20:49:36.474667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:43.268 [2024-12-05 20:49:36.474673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:43.268 [2024-12-05 20:49:36.474678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.268 [2024-12-05 20:49:36.486648] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.268 [2024-12-05 20:49:36.487155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.268 20:49:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 529378 00:29:43.268 [2024-12-05 20:49:36.553749] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:29:44.463 5188.86 IOPS, 20.27 MiB/s [2024-12-05T19:49:38.841Z] 6124.38 IOPS, 23.92 MiB/s [2024-12-05T19:49:40.216Z] 6822.44 IOPS, 26.65 MiB/s [2024-12-05T19:49:41.151Z] 7407.00 IOPS, 28.93 MiB/s [2024-12-05T19:49:42.085Z] 7856.91 IOPS, 30.69 MiB/s [2024-12-05T19:49:43.020Z] 8242.75 IOPS, 32.20 MiB/s [2024-12-05T19:49:43.958Z] 8570.38 IOPS, 33.48 MiB/s [2024-12-05T19:49:44.894Z] 8838.07 IOPS, 34.52 MiB/s [2024-12-05T19:49:44.894Z] 9069.40 IOPS, 35.43 MiB/s 00:29:51.453 Latency(us) 00:29:51.453 [2024-12-05T19:49:44.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.453 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:51.453 Verification LBA range: start 0x0 length 0x4000 00:29:51.453 Nvme1n1 : 15.01 9072.15 35.44 12179.39 0.00 6003.18 480.35 17515.99 00:29:51.453 [2024-12-05T19:49:44.894Z] =================================================================================================================== 00:29:51.453 [2024-12-05T19:49:44.894Z] Total : 9072.15 35.44 12179.39 0.00 6003.18 480.35 17515.99 00:29:51.711 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:51.711 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:51.712 rmmod nvme_tcp 00:29:51.712 rmmod nvme_fabrics 00:29:51.712 rmmod nvme_keyring 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 530422 ']' 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 530422 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 530422 ']' 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 530422 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:51.712 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530422 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530422' 00:29:51.970 killing process with pid 530422 00:29:51.970 20:49:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 530422 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 530422 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.970 20:49:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:54.506 00:29:54.506 real 0m26.536s 00:29:54.506 user 1m2.512s 00:29:54.506 sys 0m6.726s 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:54.506 ************************************ 00:29:54.506 END TEST nvmf_bdevperf 00:29:54.506 
************************************ 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.506 ************************************ 00:29:54.506 START TEST nvmf_target_disconnect 00:29:54.506 ************************************ 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:54.506 * Looking for test storage... 00:29:54.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:54.506 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:54.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.507 --rc genhtml_branch_coverage=1 00:29:54.507 --rc genhtml_function_coverage=1 00:29:54.507 --rc genhtml_legend=1 00:29:54.507 --rc geninfo_all_blocks=1 00:29:54.507 --rc geninfo_unexecuted_blocks=1 
00:29:54.507 00:29:54.507 ' 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:54.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.507 --rc genhtml_branch_coverage=1 00:29:54.507 --rc genhtml_function_coverage=1 00:29:54.507 --rc genhtml_legend=1 00:29:54.507 --rc geninfo_all_blocks=1 00:29:54.507 --rc geninfo_unexecuted_blocks=1 00:29:54.507 00:29:54.507 ' 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:54.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.507 --rc genhtml_branch_coverage=1 00:29:54.507 --rc genhtml_function_coverage=1 00:29:54.507 --rc genhtml_legend=1 00:29:54.507 --rc geninfo_all_blocks=1 00:29:54.507 --rc geninfo_unexecuted_blocks=1 00:29:54.507 00:29:54.507 ' 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:54.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.507 --rc genhtml_branch_coverage=1 00:29:54.507 --rc genhtml_function_coverage=1 00:29:54.507 --rc genhtml_legend=1 00:29:54.507 --rc geninfo_all_blocks=1 00:29:54.507 --rc geninfo_unexecuted_blocks=1 00:29:54.507 00:29:54.507 ' 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.507 20:49:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:54.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:54.507 20:49:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:01.077 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.077 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:01.077 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:01.077 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:01.077 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:01.078 
20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:01.078 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:01.078 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:01.078 Found net devices under 0000:af:00.0: cvl_0_0 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:01.078 Found net devices under 0000:af:00.1: cvl_0_1 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:01.078 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.079 20:49:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:01.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:01.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:30:01.079 00:30:01.079 --- 10.0.0.2 ping statistics --- 00:30:01.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.079 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:01.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:30:01.079 00:30:01.079 --- 10.0.0.1 ping statistics --- 00:30:01.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.079 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:01.079 20:49:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:01.079 ************************************ 00:30:01.079 START TEST nvmf_target_disconnect_tc1 00:30:01.079 ************************************ 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:01.079 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:01.079 [2024-12-05 20:49:53.798888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.079 [2024-12-05 20:49:53.798925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x180c470 with 
addr=10.0.0.2, port=4420 00:30:01.079 [2024-12-05 20:49:53.798944] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:01.079 [2024-12-05 20:49:53.798969] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:01.080 [2024-12-05 20:49:53.798974] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:01.080 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:01.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:01.080 Initializing NVMe Controllers 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:01.080 00:30:01.080 real 0m0.120s 00:30:01.080 user 0m0.048s 00:30:01.080 sys 0m0.071s 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:01.080 ************************************ 00:30:01.080 END TEST nvmf_target_disconnect_tc1 00:30:01.080 ************************************ 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:01.080 20:49:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:01.080 ************************************ 00:30:01.080 START TEST nvmf_target_disconnect_tc2 00:30:01.080 ************************************ 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=535852 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 535852 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 535852 ']' 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.080 20:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.080 [2024-12-05 20:49:53.935538] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:30:01.080 [2024-12-05 20:49:53.935575] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.080 [2024-12-05 20:49:53.990542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:01.080 [2024-12-05 20:49:54.029387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.080 [2024-12-05 20:49:54.029423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.080 [2024-12-05 20:49:54.029429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.080 [2024-12-05 20:49:54.029437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.080 [2024-12-05 20:49:54.029442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:01.080 [2024-12-05 20:49:54.030890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:01.080 [2024-12-05 20:49:54.031004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:01.080 [2024-12-05 20:49:54.031115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:01.080 [2024-12-05 20:49:54.031117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:01.080 Malloc0
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:01.080 [2024-12-05 20:49:54.206291] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:01.080 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:01.081 [2024-12-05 20:49:54.235230] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=535879
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:30:01.081 20:49:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:02.996 20:49:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 535852
00:30:02.996 20:49:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 [2024-12-05 20:49:56.262793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 [2024-12-05 20:49:56.262990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Write completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.996 Read completed with error (sct=0, sc=8)
00:30:02.996 starting I/O failed
00:30:02.997 Read completed with error (sct=0, sc=8)
00:30:02.997 starting I/O failed
00:30:02.997 Read completed with error (sct=0, sc=8)
00:30:02.997 starting I/O failed
00:30:02.997 Read completed with error (sct=0, sc=8)
00:30:02.997 starting I/O failed
00:30:02.997 [2024-12-05 20:49:56.263170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:02.997 [2024-12-05 20:49:56.263442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.263464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.263556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.263565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.263795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.263804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.263980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.263989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.264051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.264064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.264239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.264248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.264408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.264418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.264571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.264581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.264669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.264677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.264867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.264876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.265033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.265073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.265334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.265366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.265498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.265539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.265779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.265791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.265923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.265932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.266084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.266094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.266243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.266253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.266326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.266335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.266513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.266523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.266588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.266597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.266721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.266730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.266793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.266802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.266942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.266951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.267097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.267107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.267246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.267255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.267387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.267396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.267476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.267485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.267589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.267598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.267680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.267688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.267741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.267750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.267815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.267823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.267897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.267906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.267989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.997 [2024-12-05 20:49:56.267998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.997 qpair failed and we were unable to recover it.
00:30:02.997 [2024-12-05 20:49:56.268149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.268158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.268233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.268241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.268325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.268334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.268492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.268500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.268575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.268583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.268765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.268774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.268850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.268859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Write completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Write completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Write completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Write completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Write completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Write completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Read completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Write completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Write completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 Write completed with error (sct=0, sc=8)
00:30:02.998 starting I/O failed
00:30:02.998 [2024-12-05 20:49:56.269075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:02.998 [2024-12-05 20:49:56.269264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.269316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.269535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.269569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.269685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.269717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.269928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.269959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.270151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.270185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.270304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.270335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.270415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.270425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.270490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.270499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.270648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.270657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.270735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.270745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.270829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.270838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.270974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.270984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.271117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.998 [2024-12-05 20:49:56.271128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:02.998 qpair failed and we were unable to recover it.
00:30:02.998 [2024-12-05 20:49:56.271354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.998 [2024-12-05 20:49:56.271364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.998 qpair failed and we were unable to recover it. 00:30:02.998 [2024-12-05 20:49:56.271437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.998 [2024-12-05 20:49:56.271447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.998 qpair failed and we were unable to recover it. 00:30:02.998 [2024-12-05 20:49:56.271586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.998 [2024-12-05 20:49:56.271596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.998 qpair failed and we were unable to recover it. 00:30:02.998 [2024-12-05 20:49:56.271750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.998 [2024-12-05 20:49:56.271761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.998 qpair failed and we were unable to recover it. 00:30:02.998 [2024-12-05 20:49:56.271835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.998 [2024-12-05 20:49:56.271845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.998 qpair failed and we were unable to recover it. 
00:30:02.998 [2024-12-05 20:49:56.271929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.998 [2024-12-05 20:49:56.271939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.272020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.272029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.272198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.272212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.272296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.272306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.272448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.272458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 
00:30:02.999 [2024-12-05 20:49:56.272714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.272724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.272807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.272816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.272960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.272971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.273053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.273068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.273137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.273147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 
00:30:02.999 [2024-12-05 20:49:56.273391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.273402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.273557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.273567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.273647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.273656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.273730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.273740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.273886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.273897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 
00:30:02.999 [2024-12-05 20:49:56.274033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.274044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.274129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.274140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.274266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.274277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.274352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.274362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.274505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.274516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 
00:30:02.999 [2024-12-05 20:49:56.274602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.274611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.274745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.274755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.274982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.274993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.275071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.275081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.275163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.275173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 
00:30:02.999 [2024-12-05 20:49:56.275372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.275382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.275465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.275474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.275562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.275572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.275652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.275662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.275803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.275816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 
00:30:02.999 [2024-12-05 20:49:56.276013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.276023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.276087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.276097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.276184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.276193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.276268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.276278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.276417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.276427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 
00:30:02.999 [2024-12-05 20:49:56.276502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.276512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:02.999 [2024-12-05 20:49:56.276585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.999 [2024-12-05 20:49:56.276595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:02.999 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.276728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.276738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.276799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.276809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.276942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.276952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 
00:30:03.000 [2024-12-05 20:49:56.277088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.277099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.277230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.277241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.277371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.277381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.277518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.277538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.277610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.277619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 
00:30:03.000 [2024-12-05 20:49:56.277763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.277774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.277850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.277859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.278002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.278012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.278072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.278081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.278233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.278243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 
00:30:03.000 [2024-12-05 20:49:56.278394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.278405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.278538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.278549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.278647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.278660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.278735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.278749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.278886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.278900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 
00:30:03.000 [2024-12-05 20:49:56.279076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.279090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.279270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.279287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.279375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.279388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.279609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.279623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.279698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.279710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 
00:30:03.000 [2024-12-05 20:49:56.279883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.279896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.280038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.280052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.280232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.280245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.280378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.280392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.280461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.280473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 
00:30:03.000 [2024-12-05 20:49:56.280537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.280549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.280775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.280789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.280971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.280984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.281196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.281229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.281437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.281469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 
00:30:03.000 [2024-12-05 20:49:56.281699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.000 [2024-12-05 20:49:56.281727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.000 qpair failed and we were unable to recover it. 00:30:03.000 [2024-12-05 20:49:56.281903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.001 [2024-12-05 20:49:56.281917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.001 qpair failed and we were unable to recover it. 00:30:03.001 [2024-12-05 20:49:56.281988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.001 [2024-12-05 20:49:56.282001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.001 qpair failed and we were unable to recover it. 00:30:03.001 [2024-12-05 20:49:56.282143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.001 [2024-12-05 20:49:56.282169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.001 qpair failed and we were unable to recover it. 00:30:03.001 [2024-12-05 20:49:56.282313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.001 [2024-12-05 20:49:56.282327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.001 qpair failed and we were unable to recover it. 
00:30:03.001 [2024-12-05 20:49:56.282413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.001 [2024-12-05 20:49:56.282427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.001 qpair failed and we were unable to recover it. 00:30:03.001 [2024-12-05 20:49:56.282621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.001 [2024-12-05 20:49:56.282653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.001 qpair failed and we were unable to recover it. 00:30:03.001 [2024-12-05 20:49:56.282784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.001 [2024-12-05 20:49:56.282816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.001 qpair failed and we were unable to recover it. 00:30:03.001 [2024-12-05 20:49:56.282946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.001 [2024-12-05 20:49:56.282977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.001 qpair failed and we were unable to recover it. 00:30:03.001 [2024-12-05 20:49:56.283174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.001 [2024-12-05 20:49:56.283275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.001 qpair failed and we were unable to recover it. 
00:30:03.001 [2024-12-05 20:49:56.283486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.001 [2024-12-05 20:49:56.283500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:03.001 qpair failed and we were unable to recover it.
00:30:03.001 last message repeated 29 times for tqpair=0x7f8c08000b90 (20:49:56.283650 - 20:49:56.289014)
00:30:03.002 [2024-12-05 20:49:56.289166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.002 [2024-12-05 20:49:56.289205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.002 qpair failed and we were unable to recover it.
00:30:03.004 last message repeated 79 times for tqpair=0x7f8c04000b90 (20:49:56.289431 - 20:49:56.306933)
00:30:03.004 [2024-12-05 20:49:56.307101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.004 [2024-12-05 20:49:56.307173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.004 qpair failed and we were unable to recover it.
00:30:03.004 last message repeated 4 times for tqpair=0x249f590 (20:49:56.307382 - 20:49:56.307972)
00:30:03.004 [2024-12-05 20:49:56.308146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.308180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.308360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.308390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.308578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.308609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.308855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.308887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.309191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.309226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 
00:30:03.004 [2024-12-05 20:49:56.309360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.309392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.309660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.309692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.309864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.309896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.310168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.310202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.310406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.310439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 
00:30:03.004 [2024-12-05 20:49:56.310660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.310691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.310967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.310999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.311284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.311318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.311439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.311471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.311730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.311762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 
00:30:03.004 [2024-12-05 20:49:56.312031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.312074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.004 qpair failed and we were unable to recover it. 00:30:03.004 [2024-12-05 20:49:56.312342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.004 [2024-12-05 20:49:56.312373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.312671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.312703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.313025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.313056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.313262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.313294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 
00:30:03.005 [2024-12-05 20:49:56.313472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.313503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.313773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.313805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.313925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.313964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.314169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.314217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.314404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.314436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 
00:30:03.005 [2024-12-05 20:49:56.314547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.314579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.314790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.314820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.314994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.315025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.315241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.315274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.315513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.315544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 
00:30:03.005 [2024-12-05 20:49:56.315673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.315705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.315879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.315912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.316117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.316149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.316321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.316353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.316487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.316520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 
00:30:03.005 [2024-12-05 20:49:56.316790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.316822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.317042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.317091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.317311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.317343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.317476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.317507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.317751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.317784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 
00:30:03.005 [2024-12-05 20:49:56.317965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.317996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.318274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.318307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.318494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.318525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.318728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.318759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.318866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.318897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 
00:30:03.005 [2024-12-05 20:49:56.319103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.319136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.319254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.319286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.319500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.319531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.005 [2024-12-05 20:49:56.319776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.005 [2024-12-05 20:49:56.319808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.005 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.319988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.320026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 
00:30:03.006 [2024-12-05 20:49:56.320285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.320317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.320582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.320613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.320725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.320757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.320940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.320971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.321173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.321205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 
00:30:03.006 [2024-12-05 20:49:56.321447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.321479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.321602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.321634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.321827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.321859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.321976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.322008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.322261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.322293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 
00:30:03.006 [2024-12-05 20:49:56.322467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.322499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.322677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.322709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.322975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.323006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.323256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.323288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.323401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.323432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 
00:30:03.006 [2024-12-05 20:49:56.323684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.323716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.323960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.323992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.324201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.324235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.324458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.324489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.324673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.324705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 
00:30:03.006 [2024-12-05 20:49:56.324843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.324874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.325011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.325043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.325177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.325209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.325477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.325508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.325701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.325732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 
00:30:03.006 [2024-12-05 20:49:56.325919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.325951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.326068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.326101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.326306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.326338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.326531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.326562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.326812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.326843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 
00:30:03.006 [2024-12-05 20:49:56.326955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.326986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.327274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.327307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.327520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.327551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.327669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.327700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 00:30:03.006 [2024-12-05 20:49:56.327891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.006 [2024-12-05 20:49:56.327923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.006 qpair failed and we were unable to recover it. 
00:30:03.006 [2024-12-05 20:49:56.328029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.328071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.328202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.328234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.328423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.328454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.328566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.328598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.328784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.328816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 
00:30:03.007 [2024-12-05 20:49:56.329072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.329105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.329291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.329323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.329500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.329531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.329801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.329832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.330016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.330047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 
00:30:03.007 [2024-12-05 20:49:56.330240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.330273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.330399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.330430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.330551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.330582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.330853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.330885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.331154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.331187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 
00:30:03.007 [2024-12-05 20:49:56.331364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.331396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.331507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.331539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.331722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.331753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.331878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.331909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.332168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.332200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 
00:30:03.007 [2024-12-05 20:49:56.332379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.332410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.332677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.332709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.332895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.332928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.333229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.333262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.333453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.333484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 
00:30:03.007 [2024-12-05 20:49:56.333656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.333687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.333885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.333916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.334092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.334124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.334327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.334358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.334530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.334562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 
00:30:03.007 [2024-12-05 20:49:56.334805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.334837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.335055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.335098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.335220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.335257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.335377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.335408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.335652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.335684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 
00:30:03.007 [2024-12-05 20:49:56.335927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.007 [2024-12-05 20:49:56.335959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.007 qpair failed and we were unable to recover it. 00:30:03.007 [2024-12-05 20:49:56.336144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.336177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.336420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.336453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.336638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.336670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.336938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.336970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 
00:30:03.008 [2024-12-05 20:49:56.337261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.337293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.337408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.337439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.337634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.337665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.337931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.337963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.338135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.338166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 
00:30:03.008 [2024-12-05 20:49:56.338348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.338380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.338575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.338606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.338877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.338908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.339176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.339208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.339326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.339358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 
00:30:03.008 [2024-12-05 20:49:56.339484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.339515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.339654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.339686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.339925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.339956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.340234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.340266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.340472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.340504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 
00:30:03.008 [2024-12-05 20:49:56.340671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.340702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.340923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.340955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.341224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.341257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.341440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.341471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.341586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.341624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 
00:30:03.008 [2024-12-05 20:49:56.341751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.341782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.341965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.341997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.342203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.342235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.342420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.342451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.342693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.342725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 
00:30:03.008 [2024-12-05 20:49:56.342935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.342966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.343163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.343196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.343311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.343342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.343515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.343547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.343721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.343752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 
00:30:03.008 [2024-12-05 20:49:56.343996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.344028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.344176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.344209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.344450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.008 [2024-12-05 20:49:56.344482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.008 qpair failed and we were unable to recover it. 00:30:03.008 [2024-12-05 20:49:56.344681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.344714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.344840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.344871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 
00:30:03.009 [2024-12-05 20:49:56.345083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.345115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.345291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.345322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.345450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.345482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.345691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.345723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.345903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.345934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 
00:30:03.009 [2024-12-05 20:49:56.346128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.346161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.346332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.346364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.346629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.346661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.346838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.346869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.346982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.347013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 
00:30:03.009 [2024-12-05 20:49:56.347138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.347170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.347317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.347355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.347544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.347576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.347761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.347793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.347980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.348012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 
00:30:03.009 [2024-12-05 20:49:56.348211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.348244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.348520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.348551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.348724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.348756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.348888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.348920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 00:30:03.009 [2024-12-05 20:49:56.349038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.009 [2024-12-05 20:49:56.349090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.009 qpair failed and we were unable to recover it. 
00:30:03.009 [2024-12-05 20:49:56.349358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.349390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.349587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.349617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.349805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.349837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.350027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.350071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.350355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.350387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.350567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.350599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.350770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.350802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.350921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.350953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.351053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.351094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.351232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.351263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.351399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.351430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.351700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.351731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.352008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.352040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.352234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.352267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.009 [2024-12-05 20:49:56.352398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.009 [2024-12-05 20:49:56.352429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.009 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.352561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.352592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.352719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.352750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.352992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.353025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.353237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.353269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.353474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.353505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.353629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.353662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.353858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.353890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.354023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.354056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.354303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.354335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.354460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.354491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.354731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.354763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.354891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.354923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.355109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.355141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.355272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.355304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.355570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.355601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.355776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.355808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.355991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.356023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.356226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.356260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.356432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.356464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.356580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.356612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.356722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.356753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.356962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.356994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.357102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.357135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.357314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.357346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.357455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.357486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.357802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.357834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.358021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.358053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.358341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.358373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.358544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.358575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.358787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.358819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.358934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.358965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.359157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.359190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.010 [2024-12-05 20:49:56.359433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.010 [2024-12-05 20:49:56.359465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.010 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.359757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.359788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.359959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.359991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.360175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.360208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.360346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.360378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.360670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.360702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.360900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.360932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.361149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.361181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.361427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.361459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.361670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.361702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.361817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.361849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.362140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.362172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.362306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.362343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.362513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.362544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.362664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.362695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.362820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.362851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.363041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.363082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.363268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.363300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.363481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.363513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.363620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.363652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.363838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.363870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.364071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.364103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.364209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.364241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.364444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.364475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.364592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.364623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.364860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.364891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.365142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.365176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.365349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.365380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.365577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.365609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.365812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.365844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.366011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.366042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.366223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.366255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.366449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.366481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.366750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.366781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.366998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.367030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.367229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.367261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.367445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.367476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.011 qpair failed and we were unable to recover it.
00:30:03.011 [2024-12-05 20:49:56.367660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.011 [2024-12-05 20:49:56.367691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.012 qpair failed and we were unable to recover it.
00:30:03.012 [2024-12-05 20:49:56.367877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.012 [2024-12-05 20:49:56.367909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.012 qpair failed and we were unable to recover it.
00:30:03.012 [2024-12-05 20:49:56.368131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.012 [2024-12-05 20:49:56.368170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.012 qpair failed and we were unable to recover it.
00:30:03.012 [2024-12-05 20:49:56.368355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.012 [2024-12-05 20:49:56.368386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.012 qpair failed and we were unable to recover it.
00:30:03.012 [2024-12-05 20:49:56.368581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.012 [2024-12-05 20:49:56.368612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.012 qpair failed and we were unable to recover it.
00:30:03.012 [2024-12-05 20:49:56.368731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.012 [2024-12-05 20:49:56.368762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.012 qpair failed and we were unable to recover it.
00:30:03.012 [2024-12-05 20:49:56.368946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.012 [2024-12-05 20:49:56.368977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.012 qpair failed and we were unable to recover it.
00:30:03.012 [2024-12-05 20:49:56.369109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.012 [2024-12-05 20:49:56.369142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.012 qpair failed and we were unable to recover it.
00:30:03.012 [2024-12-05 20:49:56.369385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.012 [2024-12-05 20:49:56.369417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.012 qpair failed and we were unable to recover it.
00:30:03.012 [2024-12-05 20:49:56.369589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.012 [2024-12-05 20:49:56.369620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.012 qpair failed and we were unable to recover it.
00:30:03.012 [2024-12-05 20:49:56.369758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.012 [2024-12-05 20:49:56.369790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.012 qpair failed and we were unable to recover it.
00:30:03.012 [2024-12-05 20:49:56.370077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.370110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.370282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.370314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.370611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.370643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.370850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.370881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.371051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.371091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 
00:30:03.012 [2024-12-05 20:49:56.371308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.371340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.371610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.371642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.371885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.371916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.372189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.372221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.372516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.372549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 
00:30:03.012 [2024-12-05 20:49:56.372672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.372703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.372830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.372862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.372978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.373009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.373163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.373195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.373463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.373495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 
00:30:03.012 [2024-12-05 20:49:56.373608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.373638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.373739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.373771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.374029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.374071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.374269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.374301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.374578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.374610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 
00:30:03.012 [2024-12-05 20:49:56.374736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.374768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.374949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.374981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.375153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.375186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.375395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.375427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.375617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.375649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 
00:30:03.012 [2024-12-05 20:49:56.375848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.012 [2024-12-05 20:49:56.375880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.012 qpair failed and we were unable to recover it. 00:30:03.012 [2024-12-05 20:49:56.376051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.376092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.376300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.376332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.376525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.376556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.376756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.376787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 
00:30:03.013 [2024-12-05 20:49:56.376974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.377005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.377297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.377330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.377629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.377661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.377831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.377863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.378133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.378164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 
00:30:03.013 [2024-12-05 20:49:56.378300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.378332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.378603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.378634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.378754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.378785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.379056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.379099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.379300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.379332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 
00:30:03.013 [2024-12-05 20:49:56.379517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.379549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.379790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.379822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.380051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.380094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.380318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.380350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.380540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.380572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 
00:30:03.013 [2024-12-05 20:49:56.380763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.380795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.381013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.381044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.381227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.381259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.381445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.381476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.381665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.381697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 
00:30:03.013 [2024-12-05 20:49:56.381937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.381968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.382086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.382120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.382248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.382279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.382523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.382555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.382826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.382858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 
00:30:03.013 [2024-12-05 20:49:56.383075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.383107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.383379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.383410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.383683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.383715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.383826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.383858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.384135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.384172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 
00:30:03.013 [2024-12-05 20:49:56.384384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.384416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.384547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.384579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.384810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.013 [2024-12-05 20:49:56.384841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.013 qpair failed and we were unable to recover it. 00:30:03.013 [2024-12-05 20:49:56.385016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.385047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.385276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.385309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 
00:30:03.014 [2024-12-05 20:49:56.385579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.385610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.385728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.385760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.386001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.386033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.386217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.386250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.386551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.386582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 
00:30:03.014 [2024-12-05 20:49:56.386768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.386800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.386914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.386946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.387218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.387249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.387528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.387560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.387671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.387702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 
00:30:03.014 [2024-12-05 20:49:56.387996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.388027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.388210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.388242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.388358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.388390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.388629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.388661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.388934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.388966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 
00:30:03.014 [2024-12-05 20:49:56.389238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.389270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.389537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.389568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.389744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.389775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.389966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.389998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 00:30:03.014 [2024-12-05 20:49:56.390137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.014 [2024-12-05 20:49:56.390169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.014 qpair failed and we were unable to recover it. 
00:30:03.014 [2024-12-05 20:49:56.390364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.014 [2024-12-05 20:49:56.390396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.014 qpair failed and we were unable to recover it.
[... the same posix_sock_create connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x249f590 with addr=10.0.0.2, port=4420 repeats 42 times, [2024-12-05 20:49:56.390364] through [2024-12-05 20:49:56.400268] ...]
00:30:03.015 [2024-12-05 20:49:56.400448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.015 [2024-12-05 20:49:56.400531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:03.015 qpair failed and we were unable to recover it.
[... the same error pair for tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 repeats 40 times, [2024-12-05 20:49:56.400448] through [2024-12-05 20:49:56.409115] ...]
00:30:03.016 [2024-12-05 20:49:56.409349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.016 [2024-12-05 20:49:56.409418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.016 qpair failed and we were unable to recover it.
[... the same error pair for tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 repeats 33 times, [2024-12-05 20:49:56.409349] through [2024-12-05 20:49:56.416379] ...]
00:30:03.017 [2024-12-05 20:49:56.416572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.017 [2024-12-05 20:49:56.416603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.017 qpair failed and we were unable to recover it. 00:30:03.017 [2024-12-05 20:49:56.416846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.017 [2024-12-05 20:49:56.416878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.017 qpair failed and we were unable to recover it. 00:30:03.017 [2024-12-05 20:49:56.417020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.017 [2024-12-05 20:49:56.417051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.017 qpair failed and we were unable to recover it. 00:30:03.017 [2024-12-05 20:49:56.417275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.017 [2024-12-05 20:49:56.417307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.017 qpair failed and we were unable to recover it. 00:30:03.017 [2024-12-05 20:49:56.417430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.017 [2024-12-05 20:49:56.417462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.017 qpair failed and we were unable to recover it. 
00:30:03.017 [2024-12-05 20:49:56.417660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.417691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.417874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.417905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.418121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.418169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.418441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.418472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.418747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.418778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 
00:30:03.018 [2024-12-05 20:49:56.418899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.418930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.419119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.419151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.419264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.419296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.419411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.419442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.419618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.419649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 
00:30:03.018 [2024-12-05 20:49:56.419821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.419852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.420041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.420080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.420302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.420333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.420466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.420497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.420706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.420740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 
00:30:03.018 [2024-12-05 20:49:56.420988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.421024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.421215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.421248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.421446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.421477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.421659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.421690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.421960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.421991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 
00:30:03.018 [2024-12-05 20:49:56.422164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.422198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.422370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.422401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.422647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.422679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.422791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.422822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.423077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.423108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 
00:30:03.018 [2024-12-05 20:49:56.423283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.423314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.423448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.423481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.423727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.423758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.423936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.423969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.424195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.424228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 
00:30:03.018 [2024-12-05 20:49:56.424346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.018 [2024-12-05 20:49:56.424378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.018 qpair failed and we were unable to recover it. 00:30:03.018 [2024-12-05 20:49:56.424639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-05 20:49:56.424671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.300 qpair failed and we were unable to recover it. 00:30:03.300 [2024-12-05 20:49:56.424876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-05 20:49:56.424908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.300 qpair failed and we were unable to recover it. 00:30:03.300 [2024-12-05 20:49:56.425147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-05 20:49:56.425179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.300 qpair failed and we were unable to recover it. 00:30:03.300 [2024-12-05 20:49:56.425314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-05 20:49:56.425346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.300 qpair failed and we were unable to recover it. 
00:30:03.300 [2024-12-05 20:49:56.425464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-05 20:49:56.425497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.300 qpair failed and we were unable to recover it. 00:30:03.300 [2024-12-05 20:49:56.425614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-05 20:49:56.425645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.300 qpair failed and we were unable to recover it. 00:30:03.300 [2024-12-05 20:49:56.425765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-05 20:49:56.425796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.300 qpair failed and we were unable to recover it. 00:30:03.300 [2024-12-05 20:49:56.425968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-05 20:49:56.426000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.300 qpair failed and we were unable to recover it. 00:30:03.300 [2024-12-05 20:49:56.426214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-05 20:49:56.426246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.300 qpair failed and we were unable to recover it. 
00:30:03.301 [2024-12-05 20:49:56.426432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.426464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.426654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.426685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.426813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.426846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.427110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.427142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.427329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.427361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 
00:30:03.301 [2024-12-05 20:49:56.427493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.427524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.427655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.427687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.427940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.427971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.428147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.428179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.428362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.428394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 
00:30:03.301 [2024-12-05 20:49:56.428661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.428692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.428805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.428837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.429004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.429038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.429284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.429315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.429486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.429517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 
00:30:03.301 [2024-12-05 20:49:56.429730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.429768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.429950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.429981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.430274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.430307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.430578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.430610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.430783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.430814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 
00:30:03.301 [2024-12-05 20:49:56.431084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.431119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.431317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.431350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.431591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.431622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.431801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.431834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.432053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.432092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 
00:30:03.301 [2024-12-05 20:49:56.432368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.432400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.432715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.432746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.433000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.433031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.433241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.433271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.433472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.433503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 
00:30:03.301 [2024-12-05 20:49:56.433778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.433809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.434031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.434072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.434268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.434299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.434489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.434520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 00:30:03.301 [2024-12-05 20:49:56.434811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-05 20:49:56.434842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.301 qpair failed and we were unable to recover it. 
00:30:03.304 [2024-12-05 20:49:56.456978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.304 [2024-12-05 20:49:56.457009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.304 qpair failed and we were unable to recover it.
00:30:03.304 [2024-12-05 20:49:56.457224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.304 [2024-12-05 20:49:56.457256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.304 qpair failed and we were unable to recover it.
00:30:03.304 [2024-12-05 20:49:56.457524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.304 [2024-12-05 20:49:56.457556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.304 qpair failed and we were unable to recover it.
00:30:03.304 [2024-12-05 20:49:56.457836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.304 [2024-12-05 20:49:56.457868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.304 qpair failed and we were unable to recover it.
00:30:03.304 [2024-12-05 20:49:56.458033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ad540 is same with the state(6) to be set
00:30:03.304 [2024-12-05 20:49:56.458359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.304 [2024-12-05 20:49:56.458429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:03.304 qpair failed and we were unable to recover it.
00:30:03.305 [2024-12-05 20:49:56.462563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.462594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.462815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.462846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.463096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.463127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.463254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.463292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.463566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.463598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 
00:30:03.305 [2024-12-05 20:49:56.463815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.463846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.464134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.464168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.464451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.464483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.464764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.464795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.464993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.465025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 
00:30:03.305 [2024-12-05 20:49:56.465222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.465255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.465437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.465469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.465636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.465669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.465960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.465990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.466191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.466224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 
00:30:03.305 [2024-12-05 20:49:56.466346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.466378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.466565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.466596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.466784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.466817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.467004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.467036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.467249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.467282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 
00:30:03.305 [2024-12-05 20:49:56.467414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.467445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.467653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.467684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.467894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.467925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.468182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.468214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.468402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.468434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 
00:30:03.305 [2024-12-05 20:49:56.468571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.468602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.468916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.468947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.305 [2024-12-05 20:49:56.469214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.305 [2024-12-05 20:49:56.469248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.305 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.469525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.469557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.469886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.469918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 
00:30:03.306 [2024-12-05 20:49:56.470198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.470268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.470510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.470547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.470816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.470849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.470979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.471011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.471226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.471259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 
00:30:03.306 [2024-12-05 20:49:56.471482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.471514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.471761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.471793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.472083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.472114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.472304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.472337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.472581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.472613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 
00:30:03.306 [2024-12-05 20:49:56.472877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.472907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.473100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.473134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.473306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.473338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.473560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.473591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.473737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.473769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 
00:30:03.306 [2024-12-05 20:49:56.474013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.474045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.474272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.474305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.474536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.474567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.474821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.474853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.475102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.475134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 
00:30:03.306 [2024-12-05 20:49:56.475379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.475411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.475531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.475562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.475862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.475893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.476147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.476178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.476476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.476508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 
00:30:03.306 [2024-12-05 20:49:56.476790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.476822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.477108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.477141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.477424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.477460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.477596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.477627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.477818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.477850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 
00:30:03.306 [2024-12-05 20:49:56.478022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.478053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.478185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.478217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.478399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.478431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.478553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.306 [2024-12-05 20:49:56.478586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.306 qpair failed and we were unable to recover it. 00:30:03.306 [2024-12-05 20:49:56.478715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.478746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 
00:30:03.307 [2024-12-05 20:49:56.478869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.478902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.479196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.479230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.479371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.479403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.479596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.479628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.479773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.479804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 
00:30:03.307 [2024-12-05 20:49:56.480021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.480052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.480273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.480307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.480438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.480469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.480723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.480754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.480942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.480974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 
00:30:03.307 [2024-12-05 20:49:56.481170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.481204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.481320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.481350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.481643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.481674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.481875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.481908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.482208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.482241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 
00:30:03.307 [2024-12-05 20:49:56.482533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.482565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.482833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.482864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.483175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.483210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.483428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.483459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.483713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.483745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 
00:30:03.307 [2024-12-05 20:49:56.483934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.483966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.484247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.484280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.484497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.484529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.484784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.484815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.484931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.484964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 
00:30:03.307 [2024-12-05 20:49:56.485175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.485208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.485414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.485444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.485652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.485684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.485896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.485926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.486102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.486135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 
00:30:03.307 [2024-12-05 20:49:56.486349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.486380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.486636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.486668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.486851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.486888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.487014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.487046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 00:30:03.307 [2024-12-05 20:49:56.487311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.307 [2024-12-05 20:49:56.487342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.307 qpair failed and we were unable to recover it. 
00:30:03.307 [2024-12-05 20:49:56.487562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.487593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.487719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.487749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.487994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.488025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.488245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.488278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.488420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.488451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 
00:30:03.308 [2024-12-05 20:49:56.488741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.488774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.489013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.489044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.489257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.489290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.489472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.489504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.489755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.489786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 
00:30:03.308 [2024-12-05 20:49:56.490085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.490117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.490322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.490354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.490552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.490585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.490867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.490899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.491081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.491114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 
00:30:03.308 [2024-12-05 20:49:56.491383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.491415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.491528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.491560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.491782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.491814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.491999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.492030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.492353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.492385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 
00:30:03.308 [2024-12-05 20:49:56.492639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.492671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.492889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.492921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.493182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.493214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.493514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.493546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.493824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.493857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 
00:30:03.308 [2024-12-05 20:49:56.494051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.494089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.494311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.494343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.494540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.494572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.494845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.494877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.495123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.495155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 
00:30:03.308 [2024-12-05 20:49:56.495340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.495372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.495654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.495685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.495806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.495838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.496029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.496069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.496319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.496350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 
00:30:03.308 [2024-12-05 20:49:56.496619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.496650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.308 qpair failed and we were unable to recover it. 00:30:03.308 [2024-12-05 20:49:56.496928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.308 [2024-12-05 20:49:56.496960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.497245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.497284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.497498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.497530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.497701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.497733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 
00:30:03.309 [2024-12-05 20:49:56.497917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.497949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.498191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.498222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.498518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.498555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.498851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.498883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.499106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.499139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 
00:30:03.309 [2024-12-05 20:49:56.499415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.499447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.499697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.499728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.499985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.500017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.500329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.500362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.500545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.500576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 
00:30:03.309 [2024-12-05 20:49:56.500788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.500820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.501096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.501130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.501326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.501358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.501570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.501601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.501827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.501859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 
00:30:03.309 [2024-12-05 20:49:56.501996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.502028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.502174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.502207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.502476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.502507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.502805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.502837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.503113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.503145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 
00:30:03.309 [2024-12-05 20:49:56.503327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.503359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.503629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.503659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.503935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.503966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.504203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.504236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.504442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.504474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 
00:30:03.309 [2024-12-05 20:49:56.504663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.504695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.504951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.504982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.505181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.505214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.505457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.505489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.505663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.505695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 
00:30:03.309 [2024-12-05 20:49:56.505884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.505916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.506125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.309 [2024-12-05 20:49:56.506158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.309 qpair failed and we were unable to recover it. 00:30:03.309 [2024-12-05 20:49:56.506343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.506374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.506512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.506543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.506759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.506792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 
00:30:03.310 [2024-12-05 20:49:56.507007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.507038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.507295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.507326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.507571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.507609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.507722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.507753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.508035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.508074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 
00:30:03.310 [2024-12-05 20:49:56.508261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.508293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.508477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.508508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.508780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.508812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.509086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.509119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.509263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.509296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 
00:30:03.310 [2024-12-05 20:49:56.509467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.509499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.509777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.509809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.509994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.510026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.510251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.510283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.510426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.510457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 
00:30:03.310 [2024-12-05 20:49:56.510724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.510755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.510950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.510981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.511182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.511215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.511492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.511524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.511720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.511752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 
00:30:03.310 [2024-12-05 20:49:56.511996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.512028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.512180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.512213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.512410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.512441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.512735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.512767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.512937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.512969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 
00:30:03.310 [2024-12-05 20:49:56.513249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.513282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.513529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.513562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.513845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.513877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.310 [2024-12-05 20:49:56.514078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.310 [2024-12-05 20:49:56.514110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.310 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.514335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.514368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 
00:30:03.311 [2024-12-05 20:49:56.514482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.514514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.514717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.514748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.514961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.514992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.515213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.515247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.515493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.515524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 
00:30:03.311 [2024-12-05 20:49:56.515670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.515701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.515986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.516018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.516234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.516266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.516437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.516469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.516667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.516698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 
00:30:03.311 [2024-12-05 20:49:56.516817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.516850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.516985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.517017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.517239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.517277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.517524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.517556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.517777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.517809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 
00:30:03.311 [2024-12-05 20:49:56.517985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.518016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.518239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.518271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.518474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.518506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.518805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.518835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.519108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.519142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 
00:30:03.311 [2024-12-05 20:49:56.519326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.519358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.519498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.519529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.519824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.519856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.520131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.520164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.520372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.520404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 
00:30:03.311 [2024-12-05 20:49:56.520596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.520628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.520816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.520848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.520964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.520997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.521277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.521312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.521590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.521622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 
00:30:03.311 [2024-12-05 20:49:56.521854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.521885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.522185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.522217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.522349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.522379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.522651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.522683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 00:30:03.311 [2024-12-05 20:49:56.523036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.311 [2024-12-05 20:49:56.523075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.311 qpair failed and we were unable to recover it. 
00:30:03.312 [2024-12-05 20:49:56.523283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.523314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.523521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.523553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.523739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.523770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.523985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.524016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.524224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.524257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 
00:30:03.312 [2024-12-05 20:49:56.524497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.524529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.524829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.524860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.525075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.525108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.525325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.525357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.525495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.525527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 
00:30:03.312 [2024-12-05 20:49:56.525657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.525689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.525961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.525993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.526277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.526310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.526569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.526601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.526931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.526964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 
00:30:03.312 [2024-12-05 20:49:56.527214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.527247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.527443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.527475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.527658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.527695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.527888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.527920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.528123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.528156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 
00:30:03.312 [2024-12-05 20:49:56.528431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.528463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.528657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.528688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.528816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.528848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.529037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.529078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.529268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.529299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 
00:30:03.312 [2024-12-05 20:49:56.529587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.529618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.529884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.529915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.530199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.530232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.530453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.530485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 00:30:03.312 [2024-12-05 20:49:56.530702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.312 [2024-12-05 20:49:56.530734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.312 qpair failed and we were unable to recover it. 
00:30:03.315 [2024-12-05 20:49:56.555524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.315 [2024-12-05 20:49:56.555595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.315 qpair failed and we were unable to recover it.
00:30:03.316 [2024-12-05 20:49:56.559357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.559389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.559586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.559617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.559821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.559853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.560048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.560088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.560308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.560339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 
00:30:03.316 [2024-12-05 20:49:56.560514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.560546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.560759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.560789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.560915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.560946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.561149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.561182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.561408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.561441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 
00:30:03.316 [2024-12-05 20:49:56.561640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.561671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.561813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.561845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.562043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.562086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.562413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.562444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.562692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.562724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 
00:30:03.316 [2024-12-05 20:49:56.562942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.562974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.563167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.563199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.563342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.563374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.563651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.563683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.563960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.563991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 
00:30:03.316 [2024-12-05 20:49:56.564286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.564319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.564592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.564623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.564838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.564870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.565127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.565159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.565465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.565498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 
00:30:03.316 [2024-12-05 20:49:56.565638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.565670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.565846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.565878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.566149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.566181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.566398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.566430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.566553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.566584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 
00:30:03.316 [2024-12-05 20:49:56.566818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.566849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.567154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.567187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.567318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.567350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.567533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.567564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.316 [2024-12-05 20:49:56.567822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.567854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 
00:30:03.316 [2024-12-05 20:49:56.568143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.316 [2024-12-05 20:49:56.568174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.316 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.568429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.568461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.568641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.568678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.568939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.568971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.569198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.569232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 
00:30:03.317 [2024-12-05 20:49:56.569369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.569400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.569654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.569685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.569793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.569824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.570141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.570173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.570419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.570450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 
00:30:03.317 [2024-12-05 20:49:56.570592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.570622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.570815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.570848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.571089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.571121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.571271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.571301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.571502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.571534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 
00:30:03.317 [2024-12-05 20:49:56.571651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.571682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.571978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.572012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.572204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.572236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.572382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.572414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.572635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.572666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 
00:30:03.317 [2024-12-05 20:49:56.572788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.572820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.573105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.573137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.573354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.573385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.573639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.573671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.573872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.573903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 
00:30:03.317 [2024-12-05 20:49:56.574212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.574244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.574425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.574456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.574652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.574683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.574958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.574990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.575284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.575318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 
00:30:03.317 [2024-12-05 20:49:56.575521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.575552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.575801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.575832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.576117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.576149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.576402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.576434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.576612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.576662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 
00:30:03.317 [2024-12-05 20:49:56.576906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.576938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.317 [2024-12-05 20:49:56.577228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.317 [2024-12-05 20:49:56.577262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.317 qpair failed and we were unable to recover it. 00:30:03.318 [2024-12-05 20:49:56.577460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.318 [2024-12-05 20:49:56.577491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.318 qpair failed and we were unable to recover it. 00:30:03.318 [2024-12-05 20:49:56.577617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.318 [2024-12-05 20:49:56.577649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.318 qpair failed and we were unable to recover it. 00:30:03.318 [2024-12-05 20:49:56.577842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.318 [2024-12-05 20:49:56.577874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.318 qpair failed and we were unable to recover it. 
00:30:03.318 [2024-12-05 20:49:56.578155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.318 [2024-12-05 20:49:56.578188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.318 qpair failed and we were unable to recover it. 
[... the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats with advancing timestamps through 20:49:56.585223 ...]
00:30:03.318 [2024-12-05 20:49:56.585547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.318 [2024-12-05 20:49:56.585618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.318 qpair failed and we were unable to recover it. 
[... the same sequence then repeats for tqpair=0x7f8c08000b90 with advancing timestamps through 20:49:56.608484 ...]
00:30:03.321 [2024-12-05 20:49:56.608800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.608832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.609100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.609134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.609388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.609420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.609601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.609634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.609787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.609819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 
00:30:03.321 [2024-12-05 20:49:56.610128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.610161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.610415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.610447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.610747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.610779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.611118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.611152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.611358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.611390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 
00:30:03.321 [2024-12-05 20:49:56.611623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.611656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.611978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.612011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.612295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.612328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.612529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.612561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.612806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.612838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 
00:30:03.321 [2024-12-05 20:49:56.613021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.613053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.613345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.613379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.613661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.613693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.613985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.614018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.614277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.614310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 
00:30:03.321 [2024-12-05 20:49:56.614626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.614658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.321 [2024-12-05 20:49:56.614846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.321 [2024-12-05 20:49:56.614880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.321 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.615195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.615230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.615463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.615496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.615644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.615677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 
00:30:03.322 [2024-12-05 20:49:56.615967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.615999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.616204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.616237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.616371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.616403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.616551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.616583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.616862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.616895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 
00:30:03.322 [2024-12-05 20:49:56.617187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.617220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.617524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.617557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.617758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.617789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.618105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.618139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.618324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.618362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 
00:30:03.322 [2024-12-05 20:49:56.618589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.618621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.618764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.618796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.619108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.619143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.619286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.619319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.619458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.619490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 
00:30:03.322 [2024-12-05 20:49:56.619719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.619751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.619936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.619969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.620243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.620277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.620481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.620515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.620638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.620670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 
00:30:03.322 [2024-12-05 20:49:56.620851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.620884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.621201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.621235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.621535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.621568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.621903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.621936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.622212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.622246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 
00:30:03.322 [2024-12-05 20:49:56.622476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.622509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.622820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.622852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.623035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.623078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.623233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.623265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.623493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.623526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 
00:30:03.322 [2024-12-05 20:49:56.623728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.623761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.624045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.322 [2024-12-05 20:49:56.624098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.322 qpair failed and we were unable to recover it. 00:30:03.322 [2024-12-05 20:49:56.624376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.624408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.624557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.624589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.624697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.624729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 
00:30:03.323 [2024-12-05 20:49:56.624839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.624872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.625083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.625118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.625312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.625344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.625470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.625503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.625798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.625830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 
00:30:03.323 [2024-12-05 20:49:56.626012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.626044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.626256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.626289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.626498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.626529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.626678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.626710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.626951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.626982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 
00:30:03.323 [2024-12-05 20:49:56.627297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.627330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.627542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.627574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.627848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.627880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.628083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.628116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 00:30:03.323 [2024-12-05 20:49:56.628265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.323 [2024-12-05 20:49:56.628303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.323 qpair failed and we were unable to recover it. 
00:30:03.323 [2024-12-05 20:49:56.628432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.323 [2024-12-05 20:49:56.628464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:03.323 qpair failed and we were unable to recover it.
00:30:03.326 [... the preceding three lines (connect() refused with errno = 111, i.e. ECONNREFUSED, then the unrecoverable qpair failure for tqpair=0x7f8c08000b90, addr=10.0.0.2, port=4420) repeat ~114 more times between 20:49:56.628720 and 20:49:56.658157 ...]
00:30:03.326 [2024-12-05 20:49:56.658365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.326 [2024-12-05 20:49:56.658397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.326 qpair failed and we were unable to recover it. 00:30:03.326 [2024-12-05 20:49:56.658629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.326 [2024-12-05 20:49:56.658660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.326 qpair failed and we were unable to recover it. 00:30:03.326 [2024-12-05 20:49:56.658917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.326 [2024-12-05 20:49:56.658950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.326 qpair failed and we were unable to recover it. 00:30:03.326 [2024-12-05 20:49:56.659159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.326 [2024-12-05 20:49:56.659192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.326 qpair failed and we were unable to recover it. 00:30:03.326 [2024-12-05 20:49:56.659408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.326 [2024-12-05 20:49:56.659447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.326 qpair failed and we were unable to recover it. 
00:30:03.326 [2024-12-05 20:49:56.659663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.326 [2024-12-05 20:49:56.659694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.326 qpair failed and we were unable to recover it. 00:30:03.326 [2024-12-05 20:49:56.659951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.326 [2024-12-05 20:49:56.659983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.326 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.660167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.660200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.660387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.660419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.660549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.660581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 
00:30:03.327 [2024-12-05 20:49:56.660799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.660831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.661181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.661216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.661498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.661529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.661809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.661840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.662137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.662169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 
00:30:03.327 [2024-12-05 20:49:56.662387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.662420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.662705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.662737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.662882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.662914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.663196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.663231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.663375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.663407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 
00:30:03.327 [2024-12-05 20:49:56.663666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.663698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.663984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.664017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.664172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.664205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.664351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.664383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.664507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.664538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 
00:30:03.327 [2024-12-05 20:49:56.664768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.664801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.665020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.665052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.665222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.665255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.665548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.665580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.665855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.665886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 
00:30:03.327 [2024-12-05 20:49:56.666191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.666226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.666466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.666498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.666770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.666803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.667001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.667033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.667174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.667206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 
00:30:03.327 [2024-12-05 20:49:56.667356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.667388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.667506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.667537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.667653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.667685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.667870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.667901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.668113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.668145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 
00:30:03.327 [2024-12-05 20:49:56.668350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.668383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.668584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.668617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.668975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.327 [2024-12-05 20:49:56.669008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.327 qpair failed and we were unable to recover it. 00:30:03.327 [2024-12-05 20:49:56.669253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.669285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.669420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.669458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 
00:30:03.328 [2024-12-05 20:49:56.669763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.669795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.669923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.669954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.670139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.670172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.670339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.670370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.670707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.670739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 
00:30:03.328 [2024-12-05 20:49:56.670865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.670896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.671107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.671140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.671398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.671429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.671796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.671829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.672140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.672173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 
00:30:03.328 [2024-12-05 20:49:56.672340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.672371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.672578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.672610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.672807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.672839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.673157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.673191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.673446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.673478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 
00:30:03.328 [2024-12-05 20:49:56.673742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.673774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.673998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.674029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.674327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.674360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.674647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.674677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.674915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.674947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 
00:30:03.328 [2024-12-05 20:49:56.675227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.675261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.675520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.675551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.675852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.675884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.676106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.676138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.676425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.676456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 
00:30:03.328 [2024-12-05 20:49:56.676602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.676633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.676884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.676917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.677120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.677152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.677381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.677414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 00:30:03.328 [2024-12-05 20:49:56.677556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.677588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it. 
00:30:03.328 [2024-12-05 20:49:56.677878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.328 [2024-12-05 20:49:56.677909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.328 qpair failed and we were unable to recover it.
00:30:03.328 [... the same connect()-failed / sock-connection-error / qpair-failed triplet repeats for every reconnect attempt from 20:49:56.677878 through 20:49:56.707971, always with errno = 111, tqpair=0x7f8c08000b90, addr=10.0.0.2, port=4420 ...]
00:30:03.332 [2024-12-05 20:49:56.707939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.707971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it.
00:30:03.332 [2024-12-05 20:49:56.708196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.708230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.708548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.708580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.708809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.708841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.709040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.709082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.709333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.709366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 
00:30:03.332 [2024-12-05 20:49:56.709645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.709677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.709962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.709995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.710137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.710170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.710398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.710430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.710625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.710657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 
00:30:03.332 [2024-12-05 20:49:56.710978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.711010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.711143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.711175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.711384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.711415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.711754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.711786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.712046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.712095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 
00:30:03.332 [2024-12-05 20:49:56.712399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.712431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.712784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.712816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.713075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.713108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.713389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.713422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.713657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.713690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 
00:30:03.332 [2024-12-05 20:49:56.713969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.714001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.714344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.714377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.714566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.714598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.714825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.714857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.715049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.715091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 
00:30:03.332 [2024-12-05 20:49:56.715380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.715412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.715621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.715653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.332 qpair failed and we were unable to recover it. 00:30:03.332 [2024-12-05 20:49:56.715858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.332 [2024-12-05 20:49:56.715889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.333 qpair failed and we were unable to recover it. 00:30:03.333 [2024-12-05 20:49:56.717581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.333 [2024-12-05 20:49:56.717641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.333 qpair failed and we were unable to recover it. 00:30:03.333 [2024-12-05 20:49:56.717888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.333 [2024-12-05 20:49:56.717921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.333 qpair failed and we were unable to recover it. 
00:30:03.618 [2024-12-05 20:49:56.718202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.718237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 00:30:03.618 [2024-12-05 20:49:56.718544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.718580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 00:30:03.618 [2024-12-05 20:49:56.718797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.718830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 00:30:03.618 [2024-12-05 20:49:56.719068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.719101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 00:30:03.618 [2024-12-05 20:49:56.719310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.719342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 
00:30:03.618 [2024-12-05 20:49:56.719553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.719585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 00:30:03.618 [2024-12-05 20:49:56.719805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.719837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 00:30:03.618 [2024-12-05 20:49:56.720035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.720078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 00:30:03.618 [2024-12-05 20:49:56.720344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.720377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 00:30:03.618 [2024-12-05 20:49:56.720583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.720614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 
00:30:03.618 [2024-12-05 20:49:56.720918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.720950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 00:30:03.618 [2024-12-05 20:49:56.721249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.721282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.618 qpair failed and we were unable to recover it. 00:30:03.618 [2024-12-05 20:49:56.721557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.618 [2024-12-05 20:49:56.721589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.721802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.721835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.722095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.722129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 
00:30:03.619 [2024-12-05 20:49:56.722344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.722376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.722644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.722676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.722883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.722915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.723070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.723103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.723314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.723346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 
00:30:03.619 [2024-12-05 20:49:56.723636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.723667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.723857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.723889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.724146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.724179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.724389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.724421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.724694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.724731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 
00:30:03.619 [2024-12-05 20:49:56.724925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.724957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.725225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.725259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.725464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.725495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.725678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.725711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.725999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.726031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 
00:30:03.619 [2024-12-05 20:49:56.726194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.726226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.726511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.726544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.726745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.726777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.727033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.727074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.727268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.727301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 
00:30:03.619 [2024-12-05 20:49:56.727454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.727485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.727723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.727756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.727942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.727974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.728194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.728227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.728437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.728469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 
00:30:03.619 [2024-12-05 20:49:56.728741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.728773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.729069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.729102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.729338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.729371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.729510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.729542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 00:30:03.619 [2024-12-05 20:49:56.729658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.619 [2024-12-05 20:49:56.729690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.619 qpair failed and we were unable to recover it. 
00:30:03.619 [2024-12-05 20:49:56.729949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.619 [2024-12-05 20:49:56.729981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:03.619 qpair failed and we were unable to recover it.
00:30:03.619 [... the same three-line error sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats 114 more times between 20:49:56.730 and 20:49:56.760 ...]
00:30:03.623 [2024-12-05 20:49:56.761040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.761083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.761357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.761388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.761595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.761627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.761938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.761970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.762225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.762258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 
00:30:03.623 [2024-12-05 20:49:56.762405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.762437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.762648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.762680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.762985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.763016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.763134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.763165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.763362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.763394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 
00:30:03.623 [2024-12-05 20:49:56.763603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.763635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.763892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.763929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.764112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.764145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.764352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.764384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.764574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.764606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 
00:30:03.623 [2024-12-05 20:49:56.764911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.764943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.765221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.765255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.765417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.765449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.765578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.765609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.765935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.765968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 
00:30:03.623 [2024-12-05 20:49:56.766165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.766198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.766511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.766544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.766706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.766738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.767069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.767103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.767315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.767347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 
00:30:03.623 [2024-12-05 20:49:56.767489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.767521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.767825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.623 [2024-12-05 20:49:56.767856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.623 qpair failed and we were unable to recover it. 00:30:03.623 [2024-12-05 20:49:56.768038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.768096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.768361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.768392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.768516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.768547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 
00:30:03.624 [2024-12-05 20:49:56.768852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.768885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.769147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.769180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.769388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.769420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.769669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.769701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.770015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.770047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 
00:30:03.624 [2024-12-05 20:49:56.770263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.770295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.770589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.770620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.770921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.770953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.771147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.771182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.771443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.771475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 
00:30:03.624 [2024-12-05 20:49:56.771686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.771718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.771899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.771931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.772156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.772189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.772428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.772459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.772640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.772672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 
00:30:03.624 [2024-12-05 20:49:56.772891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.772924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.773044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.773083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.773244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.773276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.773541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.773574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.773772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.773804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 
00:30:03.624 [2024-12-05 20:49:56.774040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.774080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.774315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.774353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.774518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.774550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.774760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.774792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.774940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.774973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 
00:30:03.624 [2024-12-05 20:49:56.775173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.775208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.775530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.775563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.775811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.775843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.776077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.776111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.776314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.776347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 
00:30:03.624 [2024-12-05 20:49:56.776537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.776569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.776772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.776804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.776992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.624 [2024-12-05 20:49:56.777023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.624 qpair failed and we were unable to recover it. 00:30:03.624 [2024-12-05 20:49:56.777226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.777258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 00:30:03.625 [2024-12-05 20:49:56.777470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.777502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 
00:30:03.625 [2024-12-05 20:49:56.777746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.777779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 00:30:03.625 [2024-12-05 20:49:56.778031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.778072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 00:30:03.625 [2024-12-05 20:49:56.778279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.778311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 00:30:03.625 [2024-12-05 20:49:56.778497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.778529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 00:30:03.625 [2024-12-05 20:49:56.778789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.778821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 
00:30:03.625 [2024-12-05 20:49:56.779023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.779055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 00:30:03.625 [2024-12-05 20:49:56.779278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.779311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 00:30:03.625 [2024-12-05 20:49:56.779572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.779605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 00:30:03.625 [2024-12-05 20:49:56.779948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.779980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 00:30:03.625 [2024-12-05 20:49:56.780167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.625 [2024-12-05 20:49:56.780201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.625 qpair failed and we were unable to recover it. 
00:30:03.625 [2024-12-05 20:49:56.780462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.625 [2024-12-05 20:49:56.780494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:03.625 qpair failed and we were unable to recover it.
[log condensed: the same three-line sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats ~115 more times between 20:49:56.780462 and 20:49:56.809741, all with identical tqpair, address, port, and errno values]
00:30:03.628 [2024-12-05 20:49:56.810005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.810037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.628 [2024-12-05 20:49:56.810346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.810380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.628 [2024-12-05 20:49:56.810523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.810556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.628 [2024-12-05 20:49:56.810870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.810902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.628 [2024-12-05 20:49:56.811030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.811071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 
00:30:03.628 [2024-12-05 20:49:56.811389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.811420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.628 [2024-12-05 20:49:56.811681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.811713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.628 [2024-12-05 20:49:56.811936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.811968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.628 [2024-12-05 20:49:56.812162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.812197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.628 [2024-12-05 20:49:56.812505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.812538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 
00:30:03.628 [2024-12-05 20:49:56.812687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.812719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.628 [2024-12-05 20:49:56.812945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.812977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.628 [2024-12-05 20:49:56.813167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.813201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.628 [2024-12-05 20:49:56.813411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.628 [2024-12-05 20:49:56.813443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.628 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.813588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.813619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 
00:30:03.629 [2024-12-05 20:49:56.813887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.813920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.814132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.814166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.814419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.814452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.814768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.814800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.815008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.815040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 
00:30:03.629 [2024-12-05 20:49:56.815245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.815278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.815562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.815600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.815929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.815961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.816268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.816302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.816465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.816497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 
00:30:03.629 [2024-12-05 20:49:56.816756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.816788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.816933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.816964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.817208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.817241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.817384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.817416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.817702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.817734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 
00:30:03.629 [2024-12-05 20:49:56.817997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.818029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.818241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.818274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.818478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.818511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.818644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.818677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.818969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.819001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 
00:30:03.629 [2024-12-05 20:49:56.819304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.819338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.819489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.819521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.819729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.819761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.820055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.820109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.820368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.820400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 
00:30:03.629 [2024-12-05 20:49:56.820548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.820581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.820864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.820896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.821207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.821241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.821474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.821505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.821867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.821900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 
00:30:03.629 [2024-12-05 20:49:56.822171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.822204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.822487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.822519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.822654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.822685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.822899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.822933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 00:30:03.629 [2024-12-05 20:49:56.823150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.629 [2024-12-05 20:49:56.823183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.629 qpair failed and we were unable to recover it. 
00:30:03.630 [2024-12-05 20:49:56.823365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.823398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.823546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.823578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.823702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.823733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.824038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.824092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.824300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.824332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 
00:30:03.630 [2024-12-05 20:49:56.824536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.824568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.824769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.824801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.824997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.825029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.825252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.825284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.825481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.825514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 
00:30:03.630 [2024-12-05 20:49:56.825747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.825778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.825995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.826033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.826271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.826304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.826501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.826533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.826767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.826800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 
00:30:03.630 [2024-12-05 20:49:56.827084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.827118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.827326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.827357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.827500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.827532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.827808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.827841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.828100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.828132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 
00:30:03.630 [2024-12-05 20:49:56.828274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.828307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.828565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.828597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.828896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.828928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.829212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.829245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.829535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.829567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 
00:30:03.630 [2024-12-05 20:49:56.829844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.829877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.830190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.830222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.830382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.830414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.830708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.830741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.830937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.830970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 
00:30:03.630 [2024-12-05 20:49:56.831154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.831187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.831477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.831510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.831799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.831832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.832089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.832122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.832262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.832294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 
00:30:03.630 [2024-12-05 20:49:56.832503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.630 [2024-12-05 20:49:56.832536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.630 qpair failed and we were unable to recover it. 00:30:03.630 [2024-12-05 20:49:56.832659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.832691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.832972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.833004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.833295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.833329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.833611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.833644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 
00:30:03.631 [2024-12-05 20:49:56.833932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.833964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.834311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.834345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.834528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.834561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.834823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.834855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.835111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.835144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 
00:30:03.631 [2024-12-05 20:49:56.835331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.835363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.835622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.835654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.835972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.836005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.836284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.836316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.836513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.836545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 
00:30:03.631 [2024-12-05 20:49:56.836745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.836777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.837071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.837110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.837421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.837453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.837735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.837767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.837980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.838013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 
00:30:03.631 [2024-12-05 20:49:56.838232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.838265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.838545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.838578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.838933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.838965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.839251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.839284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.839512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.839544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 
00:30:03.631 [2024-12-05 20:49:56.839784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.839816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.840099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.840132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.840333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.840365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.840652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.840684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.840974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.841007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 
00:30:03.631 [2024-12-05 20:49:56.841238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.841272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.841551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.841582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.841861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.841893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.842164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.842197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.842423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.842455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 
00:30:03.631 [2024-12-05 20:49:56.842783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.842816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.843103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.631 [2024-12-05 20:49:56.843137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.631 qpair failed and we were unable to recover it. 00:30:03.631 [2024-12-05 20:49:56.843292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.843324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.843609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.843640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.843835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.843868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 
00:30:03.632 [2024-12-05 20:49:56.844129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.844163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.844296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.844328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.844607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.844639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.844861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.844893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.845153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.845186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 
00:30:03.632 [2024-12-05 20:49:56.845370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.845401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.845656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.845688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.845959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.845992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.846195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.846228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.846354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.846386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 
00:30:03.632 [2024-12-05 20:49:56.846569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.846600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.846797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.846829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.847025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.847077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.847342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.847374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.847629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.847661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 
00:30:03.632 [2024-12-05 20:49:56.847922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.847953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.848142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.848182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.848397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.848427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.848703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.848735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.849017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.849049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 
00:30:03.632 [2024-12-05 20:49:56.849260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.849291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.849489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.849522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.849798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.849829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.850103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.850135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.850342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.850374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 
00:30:03.632 [2024-12-05 20:49:56.850585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.850618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.850761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.850793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.850994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.851025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.851399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.851433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.851689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.851721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 
00:30:03.632 [2024-12-05 20:49:56.851996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.632 [2024-12-05 20:49:56.852028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.632 qpair failed and we were unable to recover it. 00:30:03.632 [2024-12-05 20:49:56.852244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.852277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.852486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.852518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.852762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.852794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.852925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.852957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 
00:30:03.633 [2024-12-05 20:49:56.853181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.853215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.853521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.853553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.853912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.853944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.854164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.854197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.854396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.854428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 
00:30:03.633 [2024-12-05 20:49:56.854639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.854670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.854824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.854857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.854996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.855027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.855300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.855334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.855543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.855574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 
00:30:03.633 [2024-12-05 20:49:56.855717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.855748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.855963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.855995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.856181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.856213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.856446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.856479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 00:30:03.633 [2024-12-05 20:49:56.856663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.633 [2024-12-05 20:49:56.856695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:03.633 qpair failed and we were unable to recover it. 
[... 70 further repetitions of the same record pair (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it."), timestamps 20:49:56.856980 through 20:49:56.875542 ...]
00:30:03.635 [2024-12-05 20:49:56.875753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.635 [2024-12-05 20:49:56.875784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:03.635 qpair failed and we were unable to recover it.
00:30:03.635 [2024-12-05 20:49:56.876010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.635 [2024-12-05 20:49:56.876043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:03.635 qpair failed and we were unable to recover it.
00:30:03.635 [2024-12-05 20:49:56.876336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.635 [2024-12-05 20:49:56.876368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:03.635 qpair failed and we were unable to recover it.
00:30:03.635 [2024-12-05 20:49:56.876793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.635 [2024-12-05 20:49:56.876871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.635 qpair failed and we were unable to recover it.
00:30:03.635 [2024-12-05 20:49:56.877098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.635 [2024-12-05 20:49:56.877138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.635 qpair failed and we were unable to recover it.
[... 35 further repetitions of the same record pair (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it."), timestamps 20:49:56.877424 through 20:49:56.886650 ...]
00:30:03.636 [2024-12-05 20:49:56.886944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.886976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.887263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.887296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.887498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.887530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.887883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.887915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.888240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.888275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 
00:30:03.636 [2024-12-05 20:49:56.888472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.888504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.888814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.888846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.889084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.889118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.889401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.889432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.889646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.889678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 
00:30:03.636 [2024-12-05 20:49:56.889937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.889968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.890291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.890325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.890564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.890597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.890856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.890887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.891210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.891244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 
00:30:03.636 [2024-12-05 20:49:56.891448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.891480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.891791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.891822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.892107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.892146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.892317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.636 [2024-12-05 20:49:56.892349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.636 qpair failed and we were unable to recover it. 00:30:03.636 [2024-12-05 20:49:56.892583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.892616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 
00:30:03.637 [2024-12-05 20:49:56.892904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.892937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.893188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.893222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.893361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.893395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.893625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.893658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.893778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.893810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 
00:30:03.637 [2024-12-05 20:49:56.894096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.894129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.894267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.894299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.894481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.894513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.894872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.894906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.895120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.895153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 
00:30:03.637 [2024-12-05 20:49:56.895366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.895399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.895666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.895699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.895912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.895944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.896202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.896234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.896455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.896488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 
00:30:03.637 [2024-12-05 20:49:56.896630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.896662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.896860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.896894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.897042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.897102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.897305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.897337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.897559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.897592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 
00:30:03.637 [2024-12-05 20:49:56.897795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.897830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.898066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.898099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.898243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.898276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.898426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.898459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.898760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.898792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 
00:30:03.637 [2024-12-05 20:49:56.899015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.899046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.899283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.899316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.899536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.899568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.899882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.899914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.900222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.900255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 
00:30:03.637 [2024-12-05 20:49:56.900388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.900421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.900629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.900661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.900862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.900894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.901206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.901240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.637 qpair failed and we were unable to recover it. 00:30:03.637 [2024-12-05 20:49:56.901437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.637 [2024-12-05 20:49:56.901469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 
00:30:03.638 [2024-12-05 20:49:56.901651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.901683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.901959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.901993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.902171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.902203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.902341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.902373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.902523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.902555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 
00:30:03.638 [2024-12-05 20:49:56.902830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.902862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.903131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.903165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.903376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.903408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.903614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.903646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.903968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.904000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 
00:30:03.638 [2024-12-05 20:49:56.904283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.904316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.904547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.904580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.904900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.904934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.905214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.905250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.905431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.905463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 
00:30:03.638 [2024-12-05 20:49:56.905594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.905626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.905881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.905914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.906109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.906142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.906401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.906435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 00:30:03.638 [2024-12-05 20:49:56.906700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.638 [2024-12-05 20:49:56.906732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.638 qpair failed and we were unable to recover it. 
00:30:03.638 [2024-12-05 20:49:56.907009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.907041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.907209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.907243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.907461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.907493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.907705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.907738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.907966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.907998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.908335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.908369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.908627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.908659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.908792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.908824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.909004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.909040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.909165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.909195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.909435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.909473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.909684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.909718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.909973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.910004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.910135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.910169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.910309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.910343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.910630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.910662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.910796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.910827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.911008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.911042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.911335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.911369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.911495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.911527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.911795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.638 [2024-12-05 20:49:56.911828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.638 qpair failed and we were unable to recover it.
00:30:03.638 [2024-12-05 20:49:56.912072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.912107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.912241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.912273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.912473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.912506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.912879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.912913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.913020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.913052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.913194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.913225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.913362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.913395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.913613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.913646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.913875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.913907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.914033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.914075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.914280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.914312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.914544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.914577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.914780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.914812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.915012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.915043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.915291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.915324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.915519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.915551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.915799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.915837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.916033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.916074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.916226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.916259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.916454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.916486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.916811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.916844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.916970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.917001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.917235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.917267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.917474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.917509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.917821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.917853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.918143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.918177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.918465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.918498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.918650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.918681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.918915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.918948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.919182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.919217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.919362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.919394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.919544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.919576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.919822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.919854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.920087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.920121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.920360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.920392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.920604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.920637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.920918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.920952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.921208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.921241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.921530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.921562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.921808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.921840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.922048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.922091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.922309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.922341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.922499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.639 [2024-12-05 20:49:56.922532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.639 qpair failed and we were unable to recover it.
00:30:03.639 [2024-12-05 20:49:56.922679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.922718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.922972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.923005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.923230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.923262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.923556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.923588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.923896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.923927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.924200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.924233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.924525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.924558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.924802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.924834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.925039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.925080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.925234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.925266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.925421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.925453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.925773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.925806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.926005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.926038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.926255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.926288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.926545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.926619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.926859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.926897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.927108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.927143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.927351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.927383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.927693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.927725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.928037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.928084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.928313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.928344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.928692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.928725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.929029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.929073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.929336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.929369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.929621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.929653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.929882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.929913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.930101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.930134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.930388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.930430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.930663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.930694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.930893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.930925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.931039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.931080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.931296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.931327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.931608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.931639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.931941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.931973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.932130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.932162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.932422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.932454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.932583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.932614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.932842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.932875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.933221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.933254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.933541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.933574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.933811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.933843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.934075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.934109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.640 qpair failed and we were unable to recover it.
00:30:03.640 [2024-12-05 20:49:56.934367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.640 [2024-12-05 20:49:56.934399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.641 qpair failed and we were unable to recover it.
00:30:03.641 [2024-12-05 20:49:56.934654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.641 [2024-12-05 20:49:56.934686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.641 qpair failed and we were unable to recover it.
00:30:03.641 [2024-12-05 20:49:56.934886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.641 [2024-12-05 20:49:56.934917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.641 qpair failed and we were unable to recover it.
00:30:03.641 [2024-12-05 20:49:56.935131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.641 [2024-12-05 20:49:56.935164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.641 qpair failed and we were unable to recover it.
00:30:03.641 [2024-12-05 20:49:56.935370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.641 [2024-12-05 20:49:56.935402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.641 qpair failed and we were unable to recover it.
00:30:03.641 [2024-12-05 20:49:56.935588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.641 [2024-12-05 20:49:56.935620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.641 qpair failed and we were unable to recover it.
00:30:03.641 [2024-12-05 20:49:56.935926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.641 [2024-12-05 20:49:56.935957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.641 qpair failed and we were unable to recover it.
00:30:03.641 [2024-12-05 20:49:56.936169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.641 [2024-12-05 20:49:56.936201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.641 qpair failed and we were unable to recover it.
00:30:03.641 [2024-12-05 20:49:56.936466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.641 [2024-12-05 20:49:56.936498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:03.641 qpair failed and we were unable to recover it.
00:30:03.641 [2024-12-05 20:49:56.936681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.936712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.936942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.936975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.937272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.937305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.937635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.937711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.937928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.937964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 
00:30:03.641 [2024-12-05 20:49:56.938195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.938245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.938530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.938563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.938851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.938884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.939159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.939193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.939427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.939460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 
00:30:03.641 [2024-12-05 20:49:56.939690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.939721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.939928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.939959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.940196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.940232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.940428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.940460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.940737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.940769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 
00:30:03.641 [2024-12-05 20:49:56.940996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.941028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.941301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.941347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.941499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.941530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.941655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.941687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.941923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.941955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 
00:30:03.641 [2024-12-05 20:49:56.942295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.942328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.942617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.942649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.942954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.942986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.943192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.943224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.943493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.943525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 
00:30:03.641 [2024-12-05 20:49:56.943827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.943859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.944072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.944105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.944387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.944418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.944655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.944688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 00:30:03.641 [2024-12-05 20:49:56.944973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.641 [2024-12-05 20:49:56.945005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.641 qpair failed and we were unable to recover it. 
00:30:03.642 [2024-12-05 20:49:56.945147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.945179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.945372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.945404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.945663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.945696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.945951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.945982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.946273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.946305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 
00:30:03.642 [2024-12-05 20:49:56.946533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.946566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.946722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.946754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.947006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.947038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.947171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.947203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.947357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.947389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 
00:30:03.642 [2024-12-05 20:49:56.947593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.947625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.947829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.947861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.948081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.948115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.948304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.948338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.948595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.948626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 
00:30:03.642 [2024-12-05 20:49:56.948829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.948861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.949078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.949112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.949236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.949269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.949474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.949506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.949795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.949827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 
00:30:03.642 [2024-12-05 20:49:56.950025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.950057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.950327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.950360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.950507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.950540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.950796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.950828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.951139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.951172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 
00:30:03.642 [2024-12-05 20:49:56.951332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.951363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.951498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.951536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.951722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.951753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.952034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.952077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.952221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.952254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 
00:30:03.642 [2024-12-05 20:49:56.952451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.952483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.952801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.952834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.953132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.953164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.953440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.953472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.953606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.953638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 
00:30:03.642 [2024-12-05 20:49:56.953896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.953928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.954113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.954146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.954366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.954398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.954613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.954645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.954844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.954876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 
00:30:03.642 [2024-12-05 20:49:56.955191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.955226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.642 qpair failed and we were unable to recover it. 00:30:03.642 [2024-12-05 20:49:56.955360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.642 [2024-12-05 20:49:56.955392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 00:30:03.643 [2024-12-05 20:49:56.955594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.955626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 00:30:03.643 [2024-12-05 20:49:56.955826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.955858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 00:30:03.643 [2024-12-05 20:49:56.956116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.956149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 
00:30:03.643 [2024-12-05 20:49:56.956344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.956377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 00:30:03.643 [2024-12-05 20:49:56.956664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.956696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 00:30:03.643 [2024-12-05 20:49:56.956880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.956912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 00:30:03.643 [2024-12-05 20:49:56.957194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.957227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 00:30:03.643 [2024-12-05 20:49:56.957497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.957529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 
00:30:03.643 [2024-12-05 20:49:56.957889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.957921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 00:30:03.643 [2024-12-05 20:49:56.958192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.958225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 00:30:03.643 [2024-12-05 20:49:56.958492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.958524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 00:30:03.643 [2024-12-05 20:49:56.958846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.958920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 00:30:03.643 [2024-12-05 20:49:56.959250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.643 [2024-12-05 20:49:56.959288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.643 qpair failed and we were unable to recover it. 
00:30:03.643 [2024-12-05 20:49:56.959575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.959608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.959906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.959937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.960225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.960258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.960514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.960547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.960855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.960887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.961099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.961133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.961272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.961305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.961608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.961639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.961847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.961879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.962140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.962175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.962460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.962493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.962806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.962838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.963102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.963136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.963415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.963448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.963653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.963684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.963953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.963985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.964173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.964207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.964542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.964575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.964803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.964835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.965073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.965106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.965311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.965343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.965630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.965662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.965860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.965892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.966204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.966238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.966439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.966471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.966597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.966636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.966896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.966930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.967253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.967286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.967487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.967520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.967883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.967915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.968135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.643 [2024-12-05 20:49:56.968168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.643 qpair failed and we were unable to recover it.
00:30:03.643 [2024-12-05 20:49:56.968434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.968467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.968815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.968847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.969112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.969145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.969399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.969431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.969615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.969647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.969873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.969906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.970107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.970142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.970357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.970389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.970581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.970614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.970837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.970871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.971081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.971115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.971295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.971328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.971548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.971580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.971878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.971909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.972030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.972071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.972280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.972313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.972626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.972658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.972844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.972876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.973208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.973242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.973450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.973482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.973608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.973639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.973852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.973890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.974095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.974129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.974353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.974385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.974675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.974707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.974998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.975030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.975252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.975286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.975548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.975581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.975892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.975924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.976134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.976167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.976322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.976355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.976489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.976521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.976739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.976771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.977036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.977077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.977213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.977244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.977462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.977495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.977792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.977825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.978030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.978073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.978295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.978328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.978582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.978615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.978796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.978828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.979024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.979057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.979327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-12-05 20:49:56.979360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.644 qpair failed and we were unable to recover it.
00:30:03.644 [2024-12-05 20:49:56.979513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.979546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.979769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.979801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.979927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.979959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.980164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.980198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.980459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.980490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.980696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.980734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.981013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.981045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.981245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.981279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.981411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.981443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.981712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.981745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.981948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.981979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.982238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.982273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.982486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.982518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.982820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.982852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.983125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.983159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.983315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.983348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.983535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.983567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.983773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.983805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.984091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.984124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.984337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.984371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.984512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.984543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.984825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.984857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.984993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.985027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.985225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.985261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.985576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.985610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.985857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.985890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.986217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.986250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.986416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.986448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.986646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.986679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.986987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.987019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.987153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.987186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.987320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.987351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.987623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.987655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.987884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.987917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.988103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.988136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.988409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.988442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.988581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.988613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.988944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.988977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.989251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.989284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.989507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.989539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.989819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.989851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.990055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.990095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.990300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.990331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.990527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-12-05 20:49:56.990559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.645 qpair failed and we were unable to recover it.
00:30:03.645 [2024-12-05 20:49:56.990768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.990801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.991067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.991100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.991408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.991445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.991578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.991610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.991924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.991955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.992102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.992135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.992357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.992389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.992589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.992620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.992843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.992876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.993185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.993218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.993478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.993510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.993820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.993852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.994052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.994096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.994316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.994348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.994605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.994637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.994744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.994776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.995084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.995117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.995302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.995334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.995616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.995649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.995844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.995875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.995999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.996030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.996338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.996371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.996597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.996629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.996834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.996865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.997153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.997186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.997421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.997453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.997711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.997742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.998081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.998115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.998398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.998430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.998696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.998734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.998969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.999001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.999307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.999340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.999603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.999634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:56.999826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:56.999858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.000117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.000153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.000461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.000493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.000699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.000731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.001016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.001048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.001360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.001393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.001678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.001710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.001978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.002010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.002264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.002297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.002607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.002640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.002848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.002881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.003070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.003103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.003335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.003367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.003644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.003677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.003880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.003911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.646 qpair failed and we were unable to recover it.
00:30:03.646 [2024-12-05 20:49:57.004191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.646 [2024-12-05 20:49:57.004225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.004512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.004545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.004826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.004858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.005073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.005107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.005422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.005454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.005765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.005797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.006091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.006124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.006353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.006386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.006714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.006752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.007023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.007055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.007348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.007380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.007686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.007718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.007992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.008023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.008325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.008359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.008569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.008601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.008802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.008834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.009016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.009047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.009263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.009296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.009561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.009593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.009811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.009843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.010124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.010159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.010427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.010460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.010587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.010619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.010902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.647 [2024-12-05 20:49:57.010935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.647 qpair failed and we were unable to recover it.
00:30:03.647 [2024-12-05 20:49:57.011160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.011192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.011398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.011430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.011738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.011771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.011998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.012030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.012348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.012381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 
00:30:03.647 [2024-12-05 20:49:57.012636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.012668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.012933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.012965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.013163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.013196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.013473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.013506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.013694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.013726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 
00:30:03.647 [2024-12-05 20:49:57.014023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.014055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.014339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.014371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.014661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.014694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.014948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.014979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.015290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.015324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 
00:30:03.647 [2024-12-05 20:49:57.015570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.015602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.015863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.015895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.016119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.016152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.016418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.016451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.016636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.016668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 
00:30:03.647 [2024-12-05 20:49:57.016875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.016906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.017134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.017167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.017452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.017484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.017689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.017722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 00:30:03.647 [2024-12-05 20:49:57.017919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-12-05 20:49:57.017950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.647 qpair failed and we were unable to recover it. 
00:30:03.647 [2024-12-05 20:49:57.018089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.018123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.018336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.018368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.018650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.018681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.018882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.018914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.019073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.019107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 
00:30:03.648 [2024-12-05 20:49:57.019313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.019345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.019624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.019656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.019841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.019872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.020097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.020130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.020406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.020439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 
00:30:03.648 [2024-12-05 20:49:57.020727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.020759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.021020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.021051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.021367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.021399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.021700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.021732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.022001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.022032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 
00:30:03.648 [2024-12-05 20:49:57.022193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.022226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.022536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.022570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.022771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.022803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.022999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.023030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.023250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.023283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 
00:30:03.648 [2024-12-05 20:49:57.023466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.023499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.023781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.023813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.024003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.024035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.024242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.024274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.024473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.024505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 
00:30:03.648 [2024-12-05 20:49:57.024818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.024850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.025082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.025116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.025453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.025492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.025778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.025810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.026088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.026121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 
00:30:03.648 [2024-12-05 20:49:57.026359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.026391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.026675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.026706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.027007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.027038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.027313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.027346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.027667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.027700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 
00:30:03.648 [2024-12-05 20:49:57.027913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.027945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.028151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.028186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.028518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.028551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.028830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.028861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.029126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.029160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 
00:30:03.648 [2024-12-05 20:49:57.029371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.029404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.029726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.029758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.030029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.030073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.030363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.030396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.030633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.030665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 
00:30:03.648 [2024-12-05 20:49:57.030927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.030959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.031181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.031215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.031401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.031433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.648 [2024-12-05 20:49:57.031711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.648 [2024-12-05 20:49:57.031743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.648 qpair failed and we were unable to recover it. 00:30:03.649 [2024-12-05 20:49:57.031951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.649 [2024-12-05 20:49:57.031984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.649 qpair failed and we were unable to recover it. 
00:30:03.649 [2024-12-05 20:49:57.032182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.649 [2024-12-05 20:49:57.032217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.649 qpair failed and we were unable to recover it. 00:30:03.649 [2024-12-05 20:49:57.032425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.921 [2024-12-05 20:49:57.032458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.921 qpair failed and we were unable to recover it. 00:30:03.921 [2024-12-05 20:49:57.032720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.921 [2024-12-05 20:49:57.032755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.921 qpair failed and we were unable to recover it. 00:30:03.921 [2024-12-05 20:49:57.032954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.921 [2024-12-05 20:49:57.032987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.921 qpair failed and we were unable to recover it. 00:30:03.921 [2024-12-05 20:49:57.033171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.921 [2024-12-05 20:49:57.033211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.921 qpair failed and we were unable to recover it. 
00:30:03.921 [2024-12-05 20:49:57.033411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.921 [2024-12-05 20:49:57.033442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.921 qpair failed and we were unable to recover it. 00:30:03.921 [2024-12-05 20:49:57.033753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.921 [2024-12-05 20:49:57.033785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.921 qpair failed and we were unable to recover it. 00:30:03.921 [2024-12-05 20:49:57.034091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.921 [2024-12-05 20:49:57.034124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.921 qpair failed and we were unable to recover it. 00:30:03.921 [2024-12-05 20:49:57.034393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.921 [2024-12-05 20:49:57.034426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.921 qpair failed and we were unable to recover it. 00:30:03.921 [2024-12-05 20:49:57.034682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.921 [2024-12-05 20:49:57.034714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.921 qpair failed and we were unable to recover it. 
00:30:03.921 [2024-12-05 20:49:57.035025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.921 [2024-12-05 20:49:57.035073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.921 qpair failed and we were unable to recover it.
00:30:03.924 (last three messages repeated 114 more times between 20:49:57.035336 and 20:49:57.068595; every retry hit the same connect() failure, errno = 111, for tqpair=0x249f590 against 10.0.0.2:4420)
00:30:03.924 [2024-12-05 20:49:57.068789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.068820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.069031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.069080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.069364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.069397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.069666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.069698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.069998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.070029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 
00:30:03.924 [2024-12-05 20:49:57.070239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.070273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.070554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.070587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.070769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.070802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.071093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.071127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.071335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.071367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 
00:30:03.924 [2024-12-05 20:49:57.071650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.071683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.071978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.072009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.072292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.072324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.072597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.072629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.072916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.072949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 
00:30:03.924 [2024-12-05 20:49:57.073236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.073271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.073555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.073587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.073812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.073843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.074026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.074067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.074323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.074355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 
00:30:03.924 [2024-12-05 20:49:57.074663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.074695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.074990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.075021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.075343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.075376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.075661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.924 [2024-12-05 20:49:57.075693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.924 qpair failed and we were unable to recover it. 00:30:03.924 [2024-12-05 20:49:57.075891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.075924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 
00:30:03.925 [2024-12-05 20:49:57.076184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.076217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.076475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.076508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.076725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.076757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.076944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.076976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.077292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.077326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 
00:30:03.925 [2024-12-05 20:49:57.077587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.077618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.077815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.077847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.078028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.078069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.078356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.078388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.078607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.078640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 
00:30:03.925 [2024-12-05 20:49:57.078919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.078951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.079210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.079242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.079439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.079470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.079753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.079785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.080053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.080111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 
00:30:03.925 [2024-12-05 20:49:57.080371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.080403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.080708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.080740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.081044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.081089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.081409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.081441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.081744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.081775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 
00:30:03.925 [2024-12-05 20:49:57.081986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.082024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.082221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.082255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.082540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.082573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.082793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.082825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.083018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.083049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 
00:30:03.925 [2024-12-05 20:49:57.083322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.083355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.083533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.083566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.083848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.083881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.084191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.084225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.084542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.084575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 
00:30:03.925 [2024-12-05 20:49:57.084874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.084906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.085181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.085215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.085508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.085541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.085822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.085854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.086147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.086181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 
00:30:03.925 [2024-12-05 20:49:57.086419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.086451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.086650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.086681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.086968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.087000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.087230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.925 [2024-12-05 20:49:57.087264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.925 qpair failed and we were unable to recover it. 00:30:03.925 [2024-12-05 20:49:57.087411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.087443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it. 
00:30:03.926 [2024-12-05 20:49:57.087623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.087655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it. 00:30:03.926 [2024-12-05 20:49:57.087779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.087812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it. 00:30:03.926 [2024-12-05 20:49:57.088097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.088131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it. 00:30:03.926 [2024-12-05 20:49:57.088391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.088423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it. 00:30:03.926 [2024-12-05 20:49:57.088618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.088657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it. 
00:30:03.926 [2024-12-05 20:49:57.088836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.088869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it. 00:30:03.926 [2024-12-05 20:49:57.089153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.089186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it. 00:30:03.926 [2024-12-05 20:49:57.089496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.089528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it. 00:30:03.926 [2024-12-05 20:49:57.089798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.089831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it. 00:30:03.926 [2024-12-05 20:49:57.090144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.090178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it. 
00:30:03.926 [2024-12-05 20:49:57.090383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.926 [2024-12-05 20:49:57.090415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.926 qpair failed and we were unable to recover it.
[Log condensed: the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet for tqpair=0x249f590 (addr=10.0.0.2, port=4420) repeats continuously from 20:49:57.090 through 20:49:57.123; the verbatim repeats are elided here.]
00:30:03.928 [2024-12-05 20:49:57.123913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.928 [2024-12-05 20:49:57.123945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.928 qpair failed and we were unable to recover it. 00:30:03.928 [2024-12-05 20:49:57.124234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.928 [2024-12-05 20:49:57.124274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.928 qpair failed and we were unable to recover it. 00:30:03.928 [2024-12-05 20:49:57.124496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.928 [2024-12-05 20:49:57.124528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.928 qpair failed and we were unable to recover it. 00:30:03.928 [2024-12-05 20:49:57.124817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.928 [2024-12-05 20:49:57.124848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.928 qpair failed and we were unable to recover it. 00:30:03.928 [2024-12-05 20:49:57.125159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.928 [2024-12-05 20:49:57.125193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.928 qpair failed and we were unable to recover it. 
00:30:03.928 [2024-12-05 20:49:57.125454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.928 [2024-12-05 20:49:57.125487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.928 qpair failed and we were unable to recover it. 00:30:03.928 [2024-12-05 20:49:57.125788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.928 [2024-12-05 20:49:57.125821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.928 qpair failed and we were unable to recover it. 00:30:03.928 [2024-12-05 20:49:57.126036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.928 [2024-12-05 20:49:57.126077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.928 qpair failed and we were unable to recover it. 00:30:03.928 [2024-12-05 20:49:57.126367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.928 [2024-12-05 20:49:57.126399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.928 qpair failed and we were unable to recover it. 00:30:03.928 [2024-12-05 20:49:57.126650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.928 [2024-12-05 20:49:57.126683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.928 qpair failed and we were unable to recover it. 
00:30:03.928 [2024-12-05 20:49:57.126963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.928 [2024-12-05 20:49:57.126994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.928 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.127283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.127317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.127604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.127636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.127838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.127871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.128154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.128187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 
00:30:03.929 [2024-12-05 20:49:57.128452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.128484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.128788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.128820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.129091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.129124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.129406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.129438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.129656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.129688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 
00:30:03.929 [2024-12-05 20:49:57.129981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.130012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.130323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.130357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.130637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.130670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.130956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.130988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.131276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.131309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 
00:30:03.929 [2024-12-05 20:49:57.131592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.131625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.131830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.131861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.131982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.132014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.132242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.132288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.132492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.132525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 
00:30:03.929 [2024-12-05 20:49:57.132704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.132737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.133021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.133054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.133344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.133376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.133634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.133666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.133924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.133956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 
00:30:03.929 [2024-12-05 20:49:57.134270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.134304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.134570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.134603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.134885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.134918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.135208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.135241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.135527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.135559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 
00:30:03.929 [2024-12-05 20:49:57.135743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.135776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.136004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.136036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.136351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.136384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.136643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.136675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.136983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.137016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 
00:30:03.929 [2024-12-05 20:49:57.137236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.137269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.137550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.137582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.137858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.137892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.138164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.138198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.138496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.138528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 
00:30:03.929 [2024-12-05 20:49:57.138747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.929 [2024-12-05 20:49:57.138780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.929 qpair failed and we were unable to recover it. 00:30:03.929 [2024-12-05 20:49:57.139034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.139075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.139282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.139314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.139616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.139649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.139918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.139950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 
00:30:03.930 [2024-12-05 20:49:57.140146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.140179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.140456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.140489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.140674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.140706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.140966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.140998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.141204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.141237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 
00:30:03.930 [2024-12-05 20:49:57.141512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.141545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.141806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.141838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.142043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.142086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.142359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.142391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.142624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.142656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 
00:30:03.930 [2024-12-05 20:49:57.142857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.142890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.143146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.143180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.143484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.143516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.143734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.143765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.144103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.144138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 
00:30:03.930 [2024-12-05 20:49:57.144446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.144479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.144712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.144745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.145025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.145057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.145351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.145383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 00:30:03.930 [2024-12-05 20:49:57.145522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.930 [2024-12-05 20:49:57.145555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.930 qpair failed and we were unable to recover it. 
00:30:03.930 [2024-12-05 20:49:57.145757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.930 [2024-12-05 20:49:57.145789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.930 qpair failed and we were unable to recover it.
00:30:03.932 [... preceding connect()/qpair-failure triplet repeated ~115 times between 20:49:57.145 and 20:49:57.178; every connection attempt to 10.0.0.2:4420 for tqpair=0x249f590 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered ...]
00:30:03.932 [2024-12-05 20:49:57.178527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.932 [2024-12-05 20:49:57.178559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.932 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.178839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.178877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.179108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.179142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.179359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.179392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.179575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.179607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 
00:30:03.933 [2024-12-05 20:49:57.179864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.179897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.180087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.180121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.180355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.180387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.180582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.180614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.180895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.180927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 
00:30:03.933 [2024-12-05 20:49:57.181146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.181179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.181491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.181523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.181784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.181817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.182004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.182036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.182251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.182284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 
00:30:03.933 [2024-12-05 20:49:57.182485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.182518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.182650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.182682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.182996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.183028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.183327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.183361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.183565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.183597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 
00:30:03.933 [2024-12-05 20:49:57.183799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.183831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.184036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.184079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.184306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.184339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.184517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.184549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.184745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.184777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 
00:30:03.933 [2024-12-05 20:49:57.185069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.185102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.185392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.185424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.185633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.185665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.185937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.185975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.186234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.186269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 
00:30:03.933 [2024-12-05 20:49:57.186471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.186503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.186698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.186730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.187010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.187043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.187259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.187291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.187578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.187610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 
00:30:03.933 [2024-12-05 20:49:57.187872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.187905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.188098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.188132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.188330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.188362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.188644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.188676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.188939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.188971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 
00:30:03.933 [2024-12-05 20:49:57.189279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.189312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.189507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.189539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.189832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.189864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.933 [2024-12-05 20:49:57.190145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.933 [2024-12-05 20:49:57.190179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.933 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.190468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.190501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 
00:30:03.934 [2024-12-05 20:49:57.190698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.190730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.190988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.191021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.191313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.191346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.191630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.191662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.191951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.191983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 
00:30:03.934 [2024-12-05 20:49:57.192169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.192204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.192484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.192517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.192809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.192841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.193122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.193155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.193281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.193313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 
00:30:03.934 [2024-12-05 20:49:57.193570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.193609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.193905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.193937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.194183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.194218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.194362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.194394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.194680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.194712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 
00:30:03.934 [2024-12-05 20:49:57.195019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.195052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.195265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.195298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.195502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.195534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.195736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.195769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.196054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.196111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 
00:30:03.934 [2024-12-05 20:49:57.196296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.196329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.196524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.196556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.196835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.196867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.197072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.197106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.197296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.197329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 
00:30:03.934 [2024-12-05 20:49:57.197609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.197641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.197900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.197932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.198249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.198283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.198561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.198594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.198871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.198903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 
00:30:03.934 [2024-12-05 20:49:57.199178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.199211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.199514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.199546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.199813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.199845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.200151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.200185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 00:30:03.934 [2024-12-05 20:49:57.200492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.934 [2024-12-05 20:49:57.200524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.934 qpair failed and we were unable to recover it. 
00:30:03.937 [2024-12-05 20:49:57.232083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.232116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.232328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.232361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.232598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.232630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.232899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.232931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.233127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.233160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 
00:30:03.937 [2024-12-05 20:49:57.233428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.233460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.233604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.233636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.233917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.233949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.234269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.234303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.234563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.234596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 
00:30:03.937 [2024-12-05 20:49:57.234881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.234912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.235138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.235172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.235372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.235404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.235712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.235744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.235995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.236027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 
00:30:03.937 [2024-12-05 20:49:57.236229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.236262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.236415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.236447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.236670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.236702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.236986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.237018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.237227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.237260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 
00:30:03.937 [2024-12-05 20:49:57.237547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.237579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.237887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.237920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.238186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.238219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.238521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.238553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.238758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.238790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 
00:30:03.937 [2024-12-05 20:49:57.239079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.239112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.239428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.239460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.239724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.239755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.240036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.240084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.240388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.240420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 
00:30:03.937 [2024-12-05 20:49:57.240624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.240656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.240938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.240971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.241283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.241317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.241518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.241550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 00:30:03.937 [2024-12-05 20:49:57.241755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.241787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.937 qpair failed and we were unable to recover it. 
00:30:03.937 [2024-12-05 20:49:57.242080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.937 [2024-12-05 20:49:57.242113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.242356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.242388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.242650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.242682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.242991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.243024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.243334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.243366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 
00:30:03.938 [2024-12-05 20:49:57.243568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.243600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.243879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.243910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.244112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.244146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.244332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.244364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.244645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.244677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 
00:30:03.938 [2024-12-05 20:49:57.244902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.244933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.245192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.245225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.245537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.245570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.245847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.245879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.246078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.246111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 
00:30:03.938 [2024-12-05 20:49:57.246375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.246407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.246629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.246661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.246935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.246967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.247160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.247194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.247492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.247524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 
00:30:03.938 [2024-12-05 20:49:57.247742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.247785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.248101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.248135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.248432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.248463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.248742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.248774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.248912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.248944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 
00:30:03.938 [2024-12-05 20:49:57.249149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.249183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.249369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.249401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.249610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.249643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.249853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.249885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.250096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.250129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 
00:30:03.938 [2024-12-05 20:49:57.250364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.250396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.250678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.250709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.250949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.250981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.251306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.251341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.251553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.251586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 
00:30:03.938 [2024-12-05 20:49:57.251860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.251892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.252082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.252115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.252299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.252331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.252587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.252619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 00:30:03.938 [2024-12-05 20:49:57.252907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.938 [2024-12-05 20:49:57.252938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.938 qpair failed and we were unable to recover it. 
00:30:03.938 [2024-12-05 20:49:57.253233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.938 [2024-12-05 20:49:57.253266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.938 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every retry from 20:49:57.253556 through 20:49:57.285669 ...]
00:30:03.941 [2024-12-05 20:49:57.285948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.285979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.286277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.286311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.286444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.286477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.286732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.286763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.287075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.287109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 
00:30:03.941 [2024-12-05 20:49:57.287401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.287434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.287747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.287778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.288040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.288081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.288376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.288408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.288518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.288551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 
00:30:03.941 [2024-12-05 20:49:57.288837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.288869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.289133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.289167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.289410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.289443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.289730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.289762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.289945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.289978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 
00:30:03.941 [2024-12-05 20:49:57.290322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.290361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.290578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.290611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.290859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.290892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.291098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.291130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.291323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.291356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 
00:30:03.941 [2024-12-05 20:49:57.291543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.291575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.291771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.291803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.292084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.292116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.292342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.292374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.292661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.292693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 
00:30:03.941 [2024-12-05 20:49:57.292892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.292924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.293160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.293193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.941 [2024-12-05 20:49:57.293506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.941 [2024-12-05 20:49:57.293539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.941 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.293730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.293762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.294054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.294096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 
00:30:03.942 [2024-12-05 20:49:57.294311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.294343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.294541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.294572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.294814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.294845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.295154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.295187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.295456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.295488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 
00:30:03.942 [2024-12-05 20:49:57.295776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.295807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.296030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.296071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.296267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.296299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.296556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.296588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.296948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.296980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 
00:30:03.942 [2024-12-05 20:49:57.297183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.297217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.297485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.297517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.297730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.297762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.297979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.298011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.298244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.298277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 
00:30:03.942 [2024-12-05 20:49:57.298457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.298490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.298697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.298728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.299024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.299056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.299333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.299365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.299665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.299697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 
00:30:03.942 [2024-12-05 20:49:57.299933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.299965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.300118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.300151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.300419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.300451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.300659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.300691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.300959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.300991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 
00:30:03.942 [2024-12-05 20:49:57.301255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.301295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.301597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.301629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.301890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.301921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.302205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.302239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.302529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.302561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 
00:30:03.942 [2024-12-05 20:49:57.302673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.302704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.302991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.303023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.303243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.303276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.303571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.303603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.303831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.303863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 
00:30:03.942 [2024-12-05 20:49:57.304069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.304102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.304360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.304392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.304697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.304729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.305027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.305069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.305320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.305353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 
00:30:03.942 [2024-12-05 20:49:57.305611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.305643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.305839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.305871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.306131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.306165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.306348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.306381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 00:30:03.942 [2024-12-05 20:49:57.306662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.306694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.942 qpair failed and we were unable to recover it. 
00:30:03.942 [2024-12-05 20:49:57.306955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.942 [2024-12-05 20:49:57.306988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.943 qpair failed and we were unable to recover it. 00:30:03.943 [2024-12-05 20:49:57.307213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.943 [2024-12-05 20:49:57.307246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.943 qpair failed and we were unable to recover it. 00:30:03.943 [2024-12-05 20:49:57.307534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.943 [2024-12-05 20:49:57.307566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.943 qpair failed and we were unable to recover it. 00:30:03.943 [2024-12-05 20:49:57.307762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.943 [2024-12-05 20:49:57.307794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.943 qpair failed and we were unable to recover it. 00:30:03.943 [2024-12-05 20:49:57.307973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.943 [2024-12-05 20:49:57.308005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.943 qpair failed and we were unable to recover it. 
00:30:03.943 [2024-12-05 20:49:57.308272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.943 [2024-12-05 20:49:57.308305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.943 qpair failed and we were unable to recover it. 00:30:03.943 [2024-12-05 20:49:57.308511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.943 [2024-12-05 20:49:57.308543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.943 qpair failed and we were unable to recover it. 00:30:03.943 [2024-12-05 20:49:57.308741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.943 [2024-12-05 20:49:57.308780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.943 qpair failed and we were unable to recover it. 00:30:03.943 [2024-12-05 20:49:57.309055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.943 [2024-12-05 20:49:57.309097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.943 qpair failed and we were unable to recover it. 00:30:03.943 [2024-12-05 20:49:57.309382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.943 [2024-12-05 20:49:57.309414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.943 qpair failed and we were unable to recover it. 
00:30:03.943 [2024-12-05 20:49:57.309616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.309648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.309901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.309933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.310190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.310225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.310485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.310518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.310773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.310805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.311114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.311147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.311443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.311476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.311670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.311702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.311888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.311920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.312055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.312096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.312384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.312417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.312698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.312730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.313023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.313056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.313250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.313281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.313560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.313593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.313882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.313914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.314225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.314258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.314539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.314571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.314883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.314914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.315179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.315211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.315401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.315434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.315718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.315750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.316024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.316056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.316266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.316298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.316420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.316458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.316656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.316688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.317020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.317052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.317343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.317375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.317661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.317693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.317971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.318003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.318316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.318349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.318553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.318586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.318784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.318816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.319021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.319078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.319374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.319418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.319654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.319687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.319912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.319945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.320229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.943 [2024-12-05 20:49:57.320264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.943 qpair failed and we were unable to recover it.
00:30:03.943 [2024-12-05 20:49:57.320455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.320488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.320797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.320829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.321083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.321116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.321428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.321460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.321663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.321695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.321981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.322014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.322329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.322363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.322683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.322715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.323000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.323032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.323325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.323358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.323633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.323666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.323965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.323998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.324270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.324304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.324505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.324538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.324727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.324760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.325019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.325053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.325296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.325329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.325526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.325559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.325842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.325875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.326103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.326136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.326427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.326460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.326647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.326679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.326941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.326973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.327257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.327291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.327580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.327613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.327814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.327846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.328050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.328093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.328389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.328423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.328687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.328719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.329024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.329066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.329331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.329365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.329580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.329612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.329832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.329864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.330078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.330112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.330401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.330433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.330615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.330648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.330915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.330947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.331233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.331268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.331580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.331612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.331869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.331902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.332215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.332249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.332554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.332587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.332858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.944 [2024-12-05 20:49:57.332890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.944 qpair failed and we were unable to recover it.
00:30:03.944 [2024-12-05 20:49:57.333193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.333226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.333438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.333471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.333653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.333685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.333938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.333970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.334152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.334186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.334478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.334511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.334718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.334750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.335052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.335098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.335336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.335369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.335632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.335664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.335972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.336005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.336271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.336311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.336494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.336527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.336817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.945 [2024-12-05 20:49:57.336850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:03.945 qpair failed and we were unable to recover it.
00:30:03.945 [2024-12-05 20:49:57.337132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.337167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.337435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.337468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.337670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.337702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.337903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.337936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.338216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.338250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 
00:30:03.945 [2024-12-05 20:49:57.338436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.338469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.338752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.338784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.339079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.339113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.339236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.339268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.339474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.339506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 
00:30:03.945 [2024-12-05 20:49:57.339764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.339796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.340103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.340137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.340401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.340433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.340640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.340672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.340873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.340906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 
00:30:03.945 [2024-12-05 20:49:57.341188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.341222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.341513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.341545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.341689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.341721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.341900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.341932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.342218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.342252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 
00:30:03.945 [2024-12-05 20:49:57.342540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.342572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.342785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.342817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.343096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.343130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.343423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.343456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.343637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.343675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 
00:30:03.945 [2024-12-05 20:49:57.343955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.343988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.344278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.344312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.344591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.344623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.344812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.344844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.345103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.345137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 
00:30:03.945 [2024-12-05 20:49:57.345318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.345351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.345606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.345639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.345832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.345865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.945 [2024-12-05 20:49:57.346177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.945 [2024-12-05 20:49:57.346212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.945 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.346477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.346509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 
00:30:03.946 [2024-12-05 20:49:57.346797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.346829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.347030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.347073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.347296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.347328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.347531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.347564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.347773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.347805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 
00:30:03.946 [2024-12-05 20:49:57.348011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.348043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.348324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.348357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.348637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.348668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.348971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.349002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.349301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.349335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 
00:30:03.946 [2024-12-05 20:49:57.349614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.349648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.349912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.349945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.350236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.350270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.350500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.350534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.350817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.350848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 
00:30:03.946 [2024-12-05 20:49:57.351141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.351175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:03.946 [2024-12-05 20:49:57.351477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.946 [2024-12-05 20:49:57.351510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:03.946 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.351786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.351822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.352028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.352072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.352261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.352294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 
00:30:04.220 [2024-12-05 20:49:57.352474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.352506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.352718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.352751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.353015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.353047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.353240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.353273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.353534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.353566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 
00:30:04.220 [2024-12-05 20:49:57.353825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.353857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.354039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.354092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.354282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.354314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.354598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.354631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.354825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.354857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 
00:30:04.220 [2024-12-05 20:49:57.355121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.355157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.355447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.355480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.355791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.355824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.356026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.356067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.356352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.356385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 
00:30:04.220 [2024-12-05 20:49:57.356682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.356714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.356994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.357026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.357318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.357351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.357578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.357611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.357792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.357824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 
00:30:04.220 [2024-12-05 20:49:57.358022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.358054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.358185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.358217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.358424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.358456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.358737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.358769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.359090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.359125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 
00:30:04.220 [2024-12-05 20:49:57.359413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.359446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.359724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.359755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.360090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.360123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.360407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.360439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 00:30:04.220 [2024-12-05 20:49:57.360724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.220 [2024-12-05 20:49:57.360757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.220 qpair failed and we were unable to recover it. 
00:30:04.220 [2024-12-05 20:49:57.361047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.220 [2024-12-05 20:49:57.361091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.220 qpair failed and we were unable to recover it.
00:30:04.220 [... the same three-line sequence — connect() failed with errno = 111 (ECONNREFUSED) in posix.c:1054:posix_sock_create, sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, followed by "qpair failed and we were unable to recover it." — repeated 114 more times between 2024-12-05 20:49:57.361361 and 20:49:57.393001 (log timestamps 00:30:04.220–00:30:04.223) ...]
00:30:04.223 [2024-12-05 20:49:57.393217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.393251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.393516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.393549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.393857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.393889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.394090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.394124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.394379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.394411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 
00:30:04.223 [2024-12-05 20:49:57.394721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.394752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.395048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.395101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.395332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.395365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.395625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.395658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.395935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.395967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 
00:30:04.223 [2024-12-05 20:49:57.396264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.396298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.396605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.396637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.396907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.396939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.397241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.397275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.397558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.397597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 
00:30:04.223 [2024-12-05 20:49:57.397876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.397908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.398172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.398205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.398400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.398433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.398716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.398748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.399037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.399078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 
00:30:04.223 [2024-12-05 20:49:57.399287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.399321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.399526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.399558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.399752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.399785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.399968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.400001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.400267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.400300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 
00:30:04.223 [2024-12-05 20:49:57.400501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.400533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.400810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.400843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.401114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.401148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.401448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.401481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.401754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.401785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 
00:30:04.223 [2024-12-05 20:49:57.402090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.402124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.402361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.402393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.402679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.402710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.402963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.402995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 00:30:04.223 [2024-12-05 20:49:57.403336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.403371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.223 qpair failed and we were unable to recover it. 
00:30:04.223 [2024-12-05 20:49:57.403513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.223 [2024-12-05 20:49:57.403545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.403853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.403885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.404148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.404183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.404472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.404504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.404724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.404756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 
00:30:04.224 [2024-12-05 20:49:57.404939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.404971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.405109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.405150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.405436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.405468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.405650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.405682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.405964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.405996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 
00:30:04.224 [2024-12-05 20:49:57.406307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.406340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.406630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.406662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.406908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.406940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.407260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.407294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.407534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.407566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 
00:30:04.224 [2024-12-05 20:49:57.407698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.407730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.408011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.408043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.408334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.408367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.408650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.408682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.408945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.408978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 
00:30:04.224 [2024-12-05 20:49:57.409288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.409323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.409577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.409608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.409896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.409929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.410189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.410222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.410480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.410512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 
00:30:04.224 [2024-12-05 20:49:57.410644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.410676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.410960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.410993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.411306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.411340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.411628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.411660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.411892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.411924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 
00:30:04.224 [2024-12-05 20:49:57.412154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.412188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.412472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.412504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.412794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.412826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.413111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.413145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.413435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.413468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 
00:30:04.224 [2024-12-05 20:49:57.413673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.413705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.413986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.414018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.414244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.414277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.414475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.414507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 00:30:04.224 [2024-12-05 20:49:57.414710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.224 [2024-12-05 20:49:57.414742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.224 qpair failed and we were unable to recover it. 
00:30:04.224 [2024-12-05 20:49:57.414946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.224 [2024-12-05 20:49:57.414978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.224 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed triplet repeated for every reconnect attempt on tqpair=0x249f590 through 2024-12-05 20:49:57.442517 ...]
00:30:04.226 [2024-12-05 20:49:57.442701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.226 [2024-12-05 20:49:57.442778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.226 qpair failed and we were unable to recover it.
[... the same triplet repeated for every subsequent reconnect attempt on tqpair=0x7f8c08000b90 through 2024-12-05 20:49:57.446018 ...]
00:30:04.227 [2024-12-05 20:49:57.446269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.446313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.446468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.446500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.446781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.446813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.447101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.447135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.447345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.447377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 
00:30:04.227 [2024-12-05 20:49:57.447636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.447669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.447899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.447932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.448132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.448165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.448392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.448425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.448609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.448641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 
00:30:04.227 [2024-12-05 20:49:57.448861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.448894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.449080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.449114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.449400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.449432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.449554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.449587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.449858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.449891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 
00:30:04.227 [2024-12-05 20:49:57.450144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.450177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.450322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.450354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.450628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.450660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.450859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.450891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.451156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.451189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 
00:30:04.227 [2024-12-05 20:49:57.451410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.451443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.451751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.451783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.451920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.451952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.452088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.452122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.452425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.452457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 
00:30:04.227 [2024-12-05 20:49:57.452639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.452671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.452876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.452909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.453096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.453130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.453331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.453363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.453640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.453673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 
00:30:04.227 [2024-12-05 20:49:57.453967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.453998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.454228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.454264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.454470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.454503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.454699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.227 [2024-12-05 20:49:57.454733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.227 qpair failed and we were unable to recover it. 00:30:04.227 [2024-12-05 20:49:57.454992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.455025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 
00:30:04.228 [2024-12-05 20:49:57.455217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.455250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.455459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.455491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.455767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.455800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.456047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.456095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.456292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.456324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 
00:30:04.228 [2024-12-05 20:49:57.456608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.456645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.456963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.456996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.457309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.457344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.457705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.457737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.457998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.458030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 
00:30:04.228 [2024-12-05 20:49:57.458347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.458381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.458665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.458696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.458955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.458987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.459256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.459290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.459547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.459578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 
00:30:04.228 [2024-12-05 20:49:57.459795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.459827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.460120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.460154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.460436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.460467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.460663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.460695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.460960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.460993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 
00:30:04.228 [2024-12-05 20:49:57.461226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.461260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.461545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.461577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.461783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.461814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.462127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.462160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.462357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.462389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 
00:30:04.228 [2024-12-05 20:49:57.462661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.462694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.462914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.462946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.463184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.463218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.463396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.463428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.463715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.463747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 
00:30:04.228 [2024-12-05 20:49:57.463980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.464012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.464174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.464208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.464460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.464493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.464789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.464820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.465096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.465129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 
00:30:04.228 [2024-12-05 20:49:57.465311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.465343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.465624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.465656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.465883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.465915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.466123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.466155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 00:30:04.228 [2024-12-05 20:49:57.466365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.466397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 
00:30:04.228 [2024-12-05 20:49:57.466594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.228 [2024-12-05 20:49:57.466626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.228 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix.c:1054 connect() failed with errno = 111, nvme_tcp.c:2288 sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats ~115 times between 20:49:57.466594 and 20:49:57.497089; only the timestamps differ ...]
00:30:04.231 [2024-12-05 20:49:57.497370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.497402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.497612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.497644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.497952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.497985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.498219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.498252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.498438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.498470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 
00:30:04.231 [2024-12-05 20:49:57.498656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.498689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.498970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.499002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.499136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.499169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.499452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.499484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.499769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.499801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 
00:30:04.231 [2024-12-05 20:49:57.499940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.499972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.500243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.500277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.500481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.500514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.500707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.500739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.500995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.501027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 
00:30:04.231 [2024-12-05 20:49:57.501241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.501273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.501479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.501511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.501771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.501802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.502013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.502045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.502312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.502344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 
00:30:04.231 [2024-12-05 20:49:57.502580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.502611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.502930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.502963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.503306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.503339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.503562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.503595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.503794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.503827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 
00:30:04.231 [2024-12-05 20:49:57.503955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.503986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.504285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.504319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.504519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.504551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.504814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.504845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.505125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.505158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 
00:30:04.231 [2024-12-05 20:49:57.505456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.505488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.231 qpair failed and we were unable to recover it. 00:30:04.231 [2024-12-05 20:49:57.505739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.231 [2024-12-05 20:49:57.505770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.506081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.506114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.506299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.506332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.506604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.506636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 
00:30:04.232 [2024-12-05 20:49:57.506985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.507017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.507296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.507329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.507512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.507551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.507769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.507801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.508087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.508120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 
00:30:04.232 [2024-12-05 20:49:57.508382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.508414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.508716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.508749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.509067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.509100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.509364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.509396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.509723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.509754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 
00:30:04.232 [2024-12-05 20:49:57.510038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.510078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.510357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.510388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.510677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.510709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.510934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.510965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.511197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.511230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 
00:30:04.232 [2024-12-05 20:49:57.511488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.511520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.511827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.511859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.512088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.512122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.512334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.512366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.512684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.512715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 
00:30:04.232 [2024-12-05 20:49:57.512980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.513012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.513234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.513267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.513500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.513532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.513729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.513761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.513962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.513994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 
00:30:04.232 [2024-12-05 20:49:57.514142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.514175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.514454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.514485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.514685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.514717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.514833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.514865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.515081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.515115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 
00:30:04.232 [2024-12-05 20:49:57.515297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.515329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.515456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.515488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.515751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.515783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.515979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.516012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.516231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.516264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 
00:30:04.232 [2024-12-05 20:49:57.516447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.516478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.516676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.516708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.516968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.517000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.517196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.517229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 00:30:04.232 [2024-12-05 20:49:57.517452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.517484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 
00:30:04.232 [2024-12-05 20:49:57.517746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.232 [2024-12-05 20:49:57.517778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.232 qpair failed and we were unable to recover it. 
[log truncated: the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error pair for tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it.", repeats continuously from 20:49:57.517975 through 20:49:57.536030] 
00:30:04.234 [2024-12-05 20:49:57.536090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ad540 (9): Bad file descriptor 
00:30:04.234 [2024-12-05 20:49:57.536546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.234 [2024-12-05 20:49:57.536622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.234 qpair failed and we were unable to recover it. 
[log truncated: the same error pair for tqpair=0x7f8c04000b90 repeats from 20:49:57.536902 through 20:49:57.537419] 
00:30:04.234 [2024-12-05 20:49:57.537672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.234 [2024-12-05 20:49:57.537704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.234 qpair failed and we were unable to recover it. 
[log truncated: the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 repeats from 20:49:57.537994 through 20:49:57.541235] 
00:30:04.235 [2024-12-05 20:49:57.541410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.541483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 
[log truncated: the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x249f590 with addr=10.0.0.2, port=4420 repeats from 20:49:57.541816 through 20:49:57.546344] 
00:30:04.235 [2024-12-05 20:49:57.546541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.546572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.546768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.546799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.546989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.547021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.547157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.547189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.547369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.547401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 
00:30:04.235 [2024-12-05 20:49:57.547588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.547619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.547899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.547931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.548186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.548227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.548439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.548471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.548722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.548754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 
00:30:04.235 [2024-12-05 20:49:57.548898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.548930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.549186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.549218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.549429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.549462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.549589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.549621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.549908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.549939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 
00:30:04.235 [2024-12-05 20:49:57.550222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.550255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.550450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.550482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.550732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.550764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.550986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.551018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.551283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.551317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 
00:30:04.235 [2024-12-05 20:49:57.551591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.551624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.551894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.551926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.552121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.552155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.552435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.552467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.552727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.552759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 
00:30:04.235 [2024-12-05 20:49:57.552960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.552992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.553119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.553152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.553275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.553307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.235 qpair failed and we were unable to recover it. 00:30:04.235 [2024-12-05 20:49:57.553434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.235 [2024-12-05 20:49:57.553465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.553573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.553603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 
00:30:04.236 [2024-12-05 20:49:57.553784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.553815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.554005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.554036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.554232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.554265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.554482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.554514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.554772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.554810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 
00:30:04.236 [2024-12-05 20:49:57.555093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.555127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.555263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.555294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.555503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.555535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.555669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.555701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.555953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.555985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 
00:30:04.236 [2024-12-05 20:49:57.556167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.556199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.556399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.556430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.556618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.556649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.556824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.556855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.556992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.557024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 
00:30:04.236 [2024-12-05 20:49:57.557312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.557345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.557539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.557570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.557708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.557740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.557880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.557912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.558169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.558202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 
00:30:04.236 [2024-12-05 20:49:57.558402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.558434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.558710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.558741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.558951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.558982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.559169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.559202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.559411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.559443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 
00:30:04.236 [2024-12-05 20:49:57.559646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.559678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.559881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.559912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.560109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.560142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.560319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.560351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.560556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.560586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 
00:30:04.236 [2024-12-05 20:49:57.560707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.560739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.560923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.560954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.561141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.561173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.561479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.561511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.561645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.561677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 
00:30:04.236 [2024-12-05 20:49:57.561899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.561931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.562228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.562260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.562396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.562428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.562645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.562677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.562867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.562899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 
00:30:04.236 [2024-12-05 20:49:57.563099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.563132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.563325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.563356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.563641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.563673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.564025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.564068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 00:30:04.236 [2024-12-05 20:49:57.564214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.564246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it. 
00:30:04.236 [2024-12-05 20:49:57.564435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.236 [2024-12-05 20:49:57.564468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.236 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure message pairs for tqpair=0x249f590 (addr=10.0.0.2, port=4420, errno = 111) repeated from 20:49:57.564656 through 20:49:57.591453 omitted]
00:30:04.239 [2024-12-05 20:49:57.591634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.591666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.591943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.591975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.592098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.592130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.592350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.592382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.592657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.592688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 
00:30:04.239 [2024-12-05 20:49:57.592964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.592995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.593108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.593140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.593372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.593405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.593511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.593541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.593665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.593697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 
00:30:04.239 [2024-12-05 20:49:57.593879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.593910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.594159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.594191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.594466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.594497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.594670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.594701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.594909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.594940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 
00:30:04.239 [2024-12-05 20:49:57.595242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.595274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.595518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.595550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.595691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.595723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.595958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.595990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.596271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.596303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 
00:30:04.239 [2024-12-05 20:49:57.596492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.596523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.596798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.596829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.597087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.597118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.597237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.597269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.597413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.597443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 
00:30:04.239 [2024-12-05 20:49:57.597616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.597648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.597844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.597875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.598048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.239 [2024-12-05 20:49:57.598089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.239 qpair failed and we were unable to recover it. 00:30:04.239 [2024-12-05 20:49:57.598261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.598292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.598592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.598623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 
00:30:04.240 [2024-12-05 20:49:57.598796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.598828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.599024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.599055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.599317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.599350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.599536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.599568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.599740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.599776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 
00:30:04.240 [2024-12-05 20:49:57.600076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.600109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.600215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.600246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.600424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.600456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.600642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.600672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.600848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.600879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 
00:30:04.240 [2024-12-05 20:49:57.601094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.601126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.601314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.601345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.601641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.601672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.601854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.601885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.602127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.602160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 
00:30:04.240 [2024-12-05 20:49:57.602351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.602383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.602560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.602592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.602866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.602898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.603029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.603067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.603181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.603213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 
00:30:04.240 [2024-12-05 20:49:57.603382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.603413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.603680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.603711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.603953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.603984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.604163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.604195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.604369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.604400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 
00:30:04.240 [2024-12-05 20:49:57.604649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.604681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.604808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.604839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.605012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.605044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.605270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.605303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.605417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.605448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 
00:30:04.240 [2024-12-05 20:49:57.605601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.605633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.605897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.605934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.606122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.606155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.606429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.606460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.606594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.606625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 
00:30:04.240 [2024-12-05 20:49:57.606840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.606872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.607056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.607122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.607371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.607403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.607575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.607606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.607807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.607838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 
00:30:04.240 [2024-12-05 20:49:57.608112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.608145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.608339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.608371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.608615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.608646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.608819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.608851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.609139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.609172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 
00:30:04.240 [2024-12-05 20:49:57.609482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.609515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.240 [2024-12-05 20:49:57.609633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.240 [2024-12-05 20:49:57.609663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.240 qpair failed and we were unable to recover it. 00:30:04.241 [2024-12-05 20:49:57.609850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.241 [2024-12-05 20:49:57.609881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.241 qpair failed and we were unable to recover it. 00:30:04.241 [2024-12-05 20:49:57.610176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.241 [2024-12-05 20:49:57.610209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.241 qpair failed and we were unable to recover it. 00:30:04.241 [2024-12-05 20:49:57.610382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.241 [2024-12-05 20:49:57.610414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.241 qpair failed and we were unable to recover it. 
00:30:04.243 [2024-12-05 20:49:57.635962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.635993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.636183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.636215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.636333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.636364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.636541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.636572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.636815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.636846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 
00:30:04.243 [2024-12-05 20:49:57.636969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.637001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.637124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.637158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.637402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.637433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.637613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.637644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.637764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.637795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 
00:30:04.243 [2024-12-05 20:49:57.637968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.638000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.638140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.638172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.638443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.638474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.638644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.638675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.638890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.638921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 
00:30:04.243 [2024-12-05 20:49:57.639133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.639166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.639425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.639457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.639654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.639685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.639883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.639914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.640089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.640120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 
00:30:04.243 [2024-12-05 20:49:57.640437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.640468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.640732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.640765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.640906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.640937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.641183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.641215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.641389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.641421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 
00:30:04.243 [2024-12-05 20:49:57.641611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.641642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.641890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.641921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.243 [2024-12-05 20:49:57.642124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.243 [2024-12-05 20:49:57.642157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.243 qpair failed and we were unable to recover it. 00:30:04.244 [2024-12-05 20:49:57.642336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.244 [2024-12-05 20:49:57.642368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.244 qpair failed and we were unable to recover it. 00:30:04.244 [2024-12-05 20:49:57.642538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.244 [2024-12-05 20:49:57.642569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.244 qpair failed and we were unable to recover it. 
00:30:04.244 [2024-12-05 20:49:57.642680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.244 [2024-12-05 20:49:57.642711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.244 qpair failed and we were unable to recover it. 00:30:04.244 [2024-12-05 20:49:57.642818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.244 [2024-12-05 20:49:57.642850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.244 qpair failed and we were unable to recover it. 00:30:04.244 [2024-12-05 20:49:57.643039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.244 [2024-12-05 20:49:57.643082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.244 qpair failed and we were unable to recover it. 00:30:04.244 [2024-12-05 20:49:57.643322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.244 [2024-12-05 20:49:57.643354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.244 qpair failed and we were unable to recover it. 00:30:04.244 [2024-12-05 20:49:57.643566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.244 [2024-12-05 20:49:57.643599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.244 qpair failed and we were unable to recover it. 
00:30:04.244 [2024-12-05 20:49:57.643861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.244 [2024-12-05 20:49:57.643891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.244 qpair failed and we were unable to recover it. 00:30:04.244 [2024-12-05 20:49:57.644004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.244 [2024-12-05 20:49:57.644036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.244 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.644247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.644282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.644484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.644517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.644664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.644695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 
00:30:04.521 [2024-12-05 20:49:57.644991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.645023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.645301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.645334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.645454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.645485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.645603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.645635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.645865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.645895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 
00:30:04.521 [2024-12-05 20:49:57.646105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.646138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.646273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.646305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.646550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.646581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.646692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.646724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.647022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.647053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 
00:30:04.521 [2024-12-05 20:49:57.647240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.647272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.647454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.647485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.521 [2024-12-05 20:49:57.647754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.521 [2024-12-05 20:49:57.647785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.521 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.647955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.647987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.648119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.648152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 
00:30:04.522 [2024-12-05 20:49:57.648333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.648366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.648485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.648514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.648625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.648657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.648825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.648857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.649040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.649080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 
00:30:04.522 [2024-12-05 20:49:57.649191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.649222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.649422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.649459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.649571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.649602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.649776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.649808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.650012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.650044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 
00:30:04.522 [2024-12-05 20:49:57.650182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.650213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.650401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.650432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.650563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.650595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.650723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.650755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.650937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.650968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 
00:30:04.522 [2024-12-05 20:49:57.651142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.651175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.651361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.651392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.651568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.651599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.651815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.651847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 00:30:04.522 [2024-12-05 20:49:57.652093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.522 [2024-12-05 20:49:57.652125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.522 qpair failed and we were unable to recover it. 
00:30:04.522 [2024-12-05 20:49:57.652312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.522 [2024-12-05 20:49:57.652344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.522 qpair failed and we were unable to recover it.
[repeats elided: the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence recurs continuously for tqpair=0x249f590 (addr=10.0.0.2, port=4420) from 20:49:57.652 through 20:49:57.677]
00:30:04.525 [2024-12-05 20:49:57.677810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.525 [2024-12-05 20:49:57.677841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.525 qpair failed and we were unable to recover it. 00:30:04.525 [2024-12-05 20:49:57.678043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.525 [2024-12-05 20:49:57.678102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.525 qpair failed and we were unable to recover it. 00:30:04.525 [2024-12-05 20:49:57.678237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.525 [2024-12-05 20:49:57.678268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.525 qpair failed and we were unable to recover it. 00:30:04.525 [2024-12-05 20:49:57.678467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.525 [2024-12-05 20:49:57.678497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.525 qpair failed and we were unable to recover it. 00:30:04.525 [2024-12-05 20:49:57.678781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.525 [2024-12-05 20:49:57.678813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.525 qpair failed and we were unable to recover it. 
00:30:04.525 [2024-12-05 20:49:57.678946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.525 [2024-12-05 20:49:57.678977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.525 qpair failed and we were unable to recover it. 00:30:04.525 [2024-12-05 20:49:57.679083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.525 [2024-12-05 20:49:57.679114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.525 qpair failed and we were unable to recover it. 00:30:04.525 [2024-12-05 20:49:57.679246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.525 [2024-12-05 20:49:57.679278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.525 qpair failed and we were unable to recover it. 00:30:04.525 [2024-12-05 20:49:57.679450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.679481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.679698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.679730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 
00:30:04.526 [2024-12-05 20:49:57.679841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.679871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.680112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.680144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.680356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.680388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.680556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.680586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.680761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.680792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 
00:30:04.526 [2024-12-05 20:49:57.680969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.681000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.681271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.681304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.681481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.681513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.681699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.681731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.681918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.681949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 
00:30:04.526 [2024-12-05 20:49:57.682134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.682172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.682284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.682316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.682489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.682521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.682788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.682820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.683089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.683120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 
00:30:04.526 [2024-12-05 20:49:57.683303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.683334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.683590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.683621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.683917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.683948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.684125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.684157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.684276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.684308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 
00:30:04.526 [2024-12-05 20:49:57.684551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.684581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.684716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.684747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.684938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.684969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.685096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.685128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.685394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.685464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 
00:30:04.526 [2024-12-05 20:49:57.685705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.685740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.685963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.685995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.686230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.686264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.686480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.686511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 00:30:04.526 [2024-12-05 20:49:57.686694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.526 [2024-12-05 20:49:57.686725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.526 qpair failed and we were unable to recover it. 
00:30:04.527 [2024-12-05 20:49:57.686853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.686884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.687054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.687096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.687236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.687267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.687478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.687509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.687690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.687721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 
00:30:04.527 [2024-12-05 20:49:57.687911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.687941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.688141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.688172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.688414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.688453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.688671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.688706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.688894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.688926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 
00:30:04.527 [2024-12-05 20:49:57.689119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.689151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.689267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.689298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.689567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.689598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.689864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.689894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.690088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.690120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 
00:30:04.527 [2024-12-05 20:49:57.690385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.690416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.690547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.690578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.690854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.690885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.691156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.691188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.691470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.691502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 
00:30:04.527 [2024-12-05 20:49:57.691757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.691788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.691912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.691943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.692127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.692159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.692291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.692322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.692590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.692622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 
00:30:04.527 [2024-12-05 20:49:57.692734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.692766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.693043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.693087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.693260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.693291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.693493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.693525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.693651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.693682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 
00:30:04.527 [2024-12-05 20:49:57.693976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.694007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.694286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.694319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.694452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.694484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.694738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.694769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.527 [2024-12-05 20:49:57.694957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.694989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 
00:30:04.527 [2024-12-05 20:49:57.695175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.527 [2024-12-05 20:49:57.695208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.527 qpair failed and we were unable to recover it. 00:30:04.528 [2024-12-05 20:49:57.695424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.528 [2024-12-05 20:49:57.695456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.528 qpair failed and we were unable to recover it. 00:30:04.528 [2024-12-05 20:49:57.695650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.528 [2024-12-05 20:49:57.695681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.528 qpair failed and we were unable to recover it. 00:30:04.528 [2024-12-05 20:49:57.695795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.528 [2024-12-05 20:49:57.695826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.528 qpair failed and we were unable to recover it. 00:30:04.528 [2024-12-05 20:49:57.695946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.528 [2024-12-05 20:49:57.695978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.528 qpair failed and we were unable to recover it. 
00:30:04.531 [2024-12-05 20:49:57.721647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.721679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.721855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.721886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.722086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.722119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.722300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.722332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.722524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.722556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 
00:30:04.531 [2024-12-05 20:49:57.722689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.722720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.722841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.722873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.723092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.723125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.723320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.723351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.723466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.723497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 
00:30:04.531 [2024-12-05 20:49:57.723676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.723707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.723902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.723934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.724123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.724155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.724374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.724406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.724515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.724545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 
00:30:04.531 [2024-12-05 20:49:57.724834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.724866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.725040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.725081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.725268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.725305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.725500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.725532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.725720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.725751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 
00:30:04.531 [2024-12-05 20:49:57.725860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.725892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.726102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.726134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.726376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.726407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.726598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.726629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.726814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.726846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 
00:30:04.531 [2024-12-05 20:49:57.727090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.531 [2024-12-05 20:49:57.727123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.531 qpair failed and we were unable to recover it. 00:30:04.531 [2024-12-05 20:49:57.727312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.727343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.727472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.727503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.727672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.727703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.727915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.727946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 
00:30:04.532 [2024-12-05 20:49:57.728191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.728224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.728418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.728449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.728714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.728746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.728946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.728977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.729110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.729142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 
00:30:04.532 [2024-12-05 20:49:57.729355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.729387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.729595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.729626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.729795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.729826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.730007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.730038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.730333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.730366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 
00:30:04.532 [2024-12-05 20:49:57.730490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.730521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.730790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.730820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.731029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.731072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.731187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.731219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.731543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.731613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 
00:30:04.532 [2024-12-05 20:49:57.731911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.731946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.732144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.732178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.732365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.732397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.732583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.732615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.732813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.732844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 
00:30:04.532 [2024-12-05 20:49:57.733122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.733154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.733271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.733304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.733523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.733554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.733754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.733785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.734024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.734055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 
00:30:04.532 [2024-12-05 20:49:57.734341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.734373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.734586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.734618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.734733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.734765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.735075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.735108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.735282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.735314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 
00:30:04.532 [2024-12-05 20:49:57.735428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.735460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.532 [2024-12-05 20:49:57.735729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.532 [2024-12-05 20:49:57.735760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.532 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.735945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.735977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.736253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.736287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.736473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.736505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 
00:30:04.533 [2024-12-05 20:49:57.736692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.736724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.736901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.736932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.737043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.737085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.737330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.737362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.737532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.737563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 
00:30:04.533 [2024-12-05 20:49:57.737804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.737836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.738113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.738146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.738417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.738448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.738567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.738598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.738811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.738843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 
00:30:04.533 [2024-12-05 20:49:57.739112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.739143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.739357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.739389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.739588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.739619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.739809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.739840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 00:30:04.533 [2024-12-05 20:49:57.739959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.533 [2024-12-05 20:49:57.739990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.533 qpair failed and we were unable to recover it. 
00:30:04.536 [2024-12-05 20:49:57.764592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.764623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 00:30:04.536 [2024-12-05 20:49:57.764736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.764768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 00:30:04.536 [2024-12-05 20:49:57.765019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.765050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 00:30:04.536 [2024-12-05 20:49:57.765301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.765333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 00:30:04.536 [2024-12-05 20:49:57.765528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.765560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 
00:30:04.536 [2024-12-05 20:49:57.765802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.765833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 00:30:04.536 [2024-12-05 20:49:57.766044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.766087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 00:30:04.536 [2024-12-05 20:49:57.766210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.766242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 00:30:04.536 [2024-12-05 20:49:57.766353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.766385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 00:30:04.536 [2024-12-05 20:49:57.766499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.766532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 
00:30:04.536 [2024-12-05 20:49:57.766707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.766739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 00:30:04.536 [2024-12-05 20:49:57.767000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.767031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.536 qpair failed and we were unable to recover it. 00:30:04.536 [2024-12-05 20:49:57.767252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.536 [2024-12-05 20:49:57.767285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.767536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.767567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.767834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.767872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 
00:30:04.537 [2024-12-05 20:49:57.767982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.768013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.768265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.768298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.768471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.768503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.768773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.768804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.768991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.769022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 
00:30:04.537 [2024-12-05 20:49:57.769183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.769216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.769421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.769453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.769575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.769607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.769876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.769908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.770103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.770150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 
00:30:04.537 [2024-12-05 20:49:57.770450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.770481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.770668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.770700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.770918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.770950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.771152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.771184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.771395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.771426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 
00:30:04.537 [2024-12-05 20:49:57.771717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.771749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.772034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.772074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.772248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.772280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.772464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.772496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.772688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.772719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 
00:30:04.537 [2024-12-05 20:49:57.772991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.773023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.773239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.773272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.773398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.773430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.773620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.773652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.773882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.773912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 
00:30:04.537 [2024-12-05 20:49:57.774184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.774217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.774400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.774432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.774559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.774590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.774765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.774796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.774993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.775024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 
00:30:04.537 [2024-12-05 20:49:57.775133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.775166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.775339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.775370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.775567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.775599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.537 [2024-12-05 20:49:57.775786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.537 [2024-12-05 20:49:57.775818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.537 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.775988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.776019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 
00:30:04.538 [2024-12-05 20:49:57.776238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.776270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.776374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.776406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.776518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.776549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.776725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.776757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.776943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.776981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 
00:30:04.538 [2024-12-05 20:49:57.777229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.777261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.777458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.777489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.777673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.777705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.777890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.777922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.778094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.778127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 
00:30:04.538 [2024-12-05 20:49:57.778386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.778418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.778727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.778759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.779004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.779035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.779245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.779277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.779393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.779425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 
00:30:04.538 [2024-12-05 20:49:57.779610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.779642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.779824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.779855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.780040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.780081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.780382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.780414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.780530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.780562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 
00:30:04.538 [2024-12-05 20:49:57.780806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.780838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.781013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.781044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.781262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.781294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.781565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.781597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 00:30:04.538 [2024-12-05 20:49:57.781805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.538 [2024-12-05 20:49:57.781836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.538 qpair failed and we were unable to recover it. 
00:30:04.538 [2024-12-05 20:49:57.781948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.538 [2024-12-05 20:49:57.781979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.538 qpair failed and we were unable to recover it.
[The three lines above repeat with advancing timestamps (2024-12-05 20:49:57.781948 through 20:49:57.807937), identical in every other field (tqpair=0x7f8c08000b90, addr=10.0.0.2, port=4420, errno = 111); the remaining repetitions are elided.]
00:30:04.542 [2024-12-05 20:49:57.808150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.808183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.808367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.808399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.808518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.808550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.808673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.808704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.808888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.808919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 
00:30:04.542 [2024-12-05 20:49:57.809087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.809119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.809237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.809269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.809515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.809547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.809680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.809710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.809886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.809918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 
00:30:04.542 [2024-12-05 20:49:57.810099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.810130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.810291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.810363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.810589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.810625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.810873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.810905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.811084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.811118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 
00:30:04.542 [2024-12-05 20:49:57.811293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.811323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.811531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.811562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.811735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.811767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.812030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.812083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.812256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.812289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 
00:30:04.542 [2024-12-05 20:49:57.812460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.812491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.812679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.812710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.812920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.812951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.813132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.813164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.813336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.813376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 
00:30:04.542 [2024-12-05 20:49:57.813503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.813534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.813739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.813771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.813896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.813927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.814183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.814216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.814409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.814440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 
00:30:04.542 [2024-12-05 20:49:57.814568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.814599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.814893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.814924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.815125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.815156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.815341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.815373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.815647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.815678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 
00:30:04.542 [2024-12-05 20:49:57.815805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.815836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.815952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.542 [2024-12-05 20:49:57.815983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.542 qpair failed and we were unable to recover it. 00:30:04.542 [2024-12-05 20:49:57.816090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.816121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.816377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.816409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.816616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.816647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 
00:30:04.543 [2024-12-05 20:49:57.816890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.816922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.817048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.817089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.817365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.817395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.817584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.817616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.817884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.817915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 
00:30:04.543 [2024-12-05 20:49:57.818084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.818116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.818326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.818357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.818543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.818574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.818692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.818723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.818967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.818998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 
00:30:04.543 [2024-12-05 20:49:57.819191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.819222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.819413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.819445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.819634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.819666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.819848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.819879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.820093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.820125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 
00:30:04.543 [2024-12-05 20:49:57.820318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.820348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.820465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.820496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.820673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.820705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.820896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.820927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.821102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.821134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 
00:30:04.543 [2024-12-05 20:49:57.821310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.821340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.821551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.821581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.821783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.821815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.822004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.822035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.822259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.822297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 
00:30:04.543 [2024-12-05 20:49:57.822414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.822445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.822568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.822598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.822783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.822813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.822937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.822967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.823151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.823184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 
00:30:04.543 [2024-12-05 20:49:57.823308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.823338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.823450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.823481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.823654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.823684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.823875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.823905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 00:30:04.543 [2024-12-05 20:49:57.824172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.543 [2024-12-05 20:49:57.824204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.543 qpair failed and we were unable to recover it. 
00:30:04.543 [2024-12-05 20:49:57.824392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.543 [2024-12-05 20:49:57.824424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.543 qpair failed and we were unable to recover it.
00:30:04.547 [2024-12-05 20:49:57.850228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.850260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.850430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.850461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.850640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.850672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.850883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.850914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.851038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.851081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 
00:30:04.547 [2024-12-05 20:49:57.851203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.851233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.851500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.851532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.851775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.851806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.852115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.852148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.852282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.852314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 
00:30:04.547 [2024-12-05 20:49:57.852582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.852613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.852728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.852760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.852974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.853006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.853205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.853237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.853410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.853441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 
00:30:04.547 [2024-12-05 20:49:57.853683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.853714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.853970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.854001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.854181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.854213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.854388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.854419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.854613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.854644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 
00:30:04.547 [2024-12-05 20:49:57.854773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.854804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.855045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.855103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.855218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.855250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.855519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.855550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.855667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.855698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 
00:30:04.547 [2024-12-05 20:49:57.855833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.855865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.855992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.856022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.856209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.856242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.856455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.856486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.856653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.856684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 
00:30:04.547 [2024-12-05 20:49:57.856867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.856899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.857093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.857126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.857238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.857269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.857453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.857484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.857736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.857768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 
00:30:04.547 [2024-12-05 20:49:57.857915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.857947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.858141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.858173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.858368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.858400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.858668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.858698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 00:30:04.547 [2024-12-05 20:49:57.858890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.547 [2024-12-05 20:49:57.858921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.547 qpair failed and we were unable to recover it. 
00:30:04.547 [2024-12-05 20:49:57.859039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.859096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.859362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.859392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.859510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.859542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.859758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.859789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.859958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.859988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 
00:30:04.548 [2024-12-05 20:49:57.860175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.860207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.860323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.860355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.860472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.860503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.860778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.860809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.861002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.861033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 
00:30:04.548 [2024-12-05 20:49:57.861262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.861293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.861542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.861573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.861830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.861862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.862081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.862114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.862391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.862422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 
00:30:04.548 [2024-12-05 20:49:57.862594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.862625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.862746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.862778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.862912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.862943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.863146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.863179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.863398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.863429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 
00:30:04.548 [2024-12-05 20:49:57.863647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.863679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.863867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.863904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.864092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.864123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.864253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.864285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.864530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.864562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 
00:30:04.548 [2024-12-05 20:49:57.864732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.864764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.864874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.864905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.865176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.865207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.865448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.865480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.865688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.865719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 
00:30:04.548 [2024-12-05 20:49:57.865936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.865968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.866143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.866175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.866355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.866386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.866560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.866591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.866835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.866867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 
00:30:04.548 [2024-12-05 20:49:57.867086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.867119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.867430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.867462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.867731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.867763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.867933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.867965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.868106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.868137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 
00:30:04.548 [2024-12-05 20:49:57.868327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.868359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.868540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.548 [2024-12-05 20:49:57.868572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.548 qpair failed and we were unable to recover it. 00:30:04.548 [2024-12-05 20:49:57.868817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.868849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.869030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.869067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.869181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.869211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 
00:30:04.549 [2024-12-05 20:49:57.869407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.869438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.869723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.869754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.869870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.869901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.870025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.870068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.870315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.870345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 
00:30:04.549 [2024-12-05 20:49:57.870533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.870564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.870764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.870795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.871092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.871124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.871396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.871427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.871562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.871593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 
00:30:04.549 [2024-12-05 20:49:57.871895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.871927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.872118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.872149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.872358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.872390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.872502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.872534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.872646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.872677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 
00:30:04.549 [2024-12-05 20:49:57.872795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.872827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.873075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.873114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.873315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.873346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.873461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.873493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.873678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.873708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 
00:30:04.549 [2024-12-05 20:49:57.873897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.873929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.874213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.874245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.874367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.874399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.874587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.874618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.874825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.874856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 
00:30:04.549 [2024-12-05 20:49:57.875105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.875138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.875381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.875412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.875700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.875731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.876007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.876039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.876229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.876260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 
00:30:04.549 [2024-12-05 20:49:57.876381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.876414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.876596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.876627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.876737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.876769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.877051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.877093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 00:30:04.549 [2024-12-05 20:49:57.877337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.549 [2024-12-05 20:49:57.877369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.549 qpair failed and we were unable to recover it. 
00:30:04.550 [2024-12-05 20:49:57.877543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.877574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.877816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.877847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.877962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.877992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.878185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.878219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.878460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.878491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 
00:30:04.550 [2024-12-05 20:49:57.878749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.878780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.878972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.879003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.879152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.879185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.879383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.879414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.879582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.879613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 
00:30:04.550 [2024-12-05 20:49:57.879745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.879777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.880019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.880051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.880241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.880272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.880459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.880491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.880693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.880724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 
00:30:04.550 [2024-12-05 20:49:57.880913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.880944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.881200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.881232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.881529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.881562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.881829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.881860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.882033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.882072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 
00:30:04.550 [2024-12-05 20:49:57.882274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.882305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.882571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.882608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.882885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.882917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.883161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.883193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.883386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.883418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 
00:30:04.550 [2024-12-05 20:49:57.883543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.883575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.883839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.883871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.884003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.884034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.884218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.884250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.884424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.884455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 
00:30:04.550 [2024-12-05 20:49:57.884709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.884740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.884925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.884956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.885071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.885104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.885371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.885403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.885597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.885628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 
00:30:04.550 [2024-12-05 20:49:57.885888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.885919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.886104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.886136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.886310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.886342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.886534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.886566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.886775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.886805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 
00:30:04.550 [2024-12-05 20:49:57.886982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.887014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.887218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.887251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.887421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.887452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.887693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.550 [2024-12-05 20:49:57.887724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.550 qpair failed and we were unable to recover it. 00:30:04.550 [2024-12-05 20:49:57.887987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.888019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 
00:30:04.551 [2024-12-05 20:49:57.888239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.888271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.888456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.888487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.888676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.888707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.888988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.889019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.889217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.889249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 
00:30:04.551 [2024-12-05 20:49:57.889520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.889551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.889761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.889792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.890033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.890074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.890374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.890405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.890677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.890707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 
00:30:04.551 [2024-12-05 20:49:57.891004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.891035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.891303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.891334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.891537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.891567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.891693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.891724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.891991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.892023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 
00:30:04.551 [2024-12-05 20:49:57.892217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.892249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.892490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.892526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.892713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.892745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.892936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.892967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.893214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.893247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 
00:30:04.551 [2024-12-05 20:49:57.893495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.893526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.893797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.893828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.894020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.894051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.894244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.894275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.894516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.894547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 
00:30:04.551 [2024-12-05 20:49:57.894739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.894771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.894943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.894974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.895095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.895128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.895236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.895267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.895437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.895468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 
00:30:04.551 [2024-12-05 20:49:57.895659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.895692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.895797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.895827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.895953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.895985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.896185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.896218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.896400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.896431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 
00:30:04.551 [2024-12-05 20:49:57.896606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.896638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.896814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.896846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.897030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.897070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.897202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.897234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.897502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.897533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 
00:30:04.551 [2024-12-05 20:49:57.897706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.897737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.897859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.897890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.898159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.898192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.898442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.551 [2024-12-05 20:49:57.898473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.551 qpair failed and we were unable to recover it. 00:30:04.551 [2024-12-05 20:49:57.898748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.898778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 
00:30:04.552 [2024-12-05 20:49:57.898964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.898994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.899185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.899216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.899337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.899367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.899551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.899582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.899780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.899811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 
00:30:04.552 [2024-12-05 20:49:57.900103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.900134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.900377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.900409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.900652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.900683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.900895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.900927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.901131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.901162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 
00:30:04.552 [2024-12-05 20:49:57.901411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.901443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.901647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.901684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.901876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.901908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.902080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.902113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.902326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.902358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 
00:30:04.552 [2024-12-05 20:49:57.902632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.902663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.902907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.902939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.903145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.903176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.903382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.903415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.903603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.903634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 
00:30:04.552 [2024-12-05 20:49:57.903768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.903799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.903983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.904014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.904293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.904325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.904500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.904531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.904721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.904752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 
00:30:04.552 [2024-12-05 20:49:57.905015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.905047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.905229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.905261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.905443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.905473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.905602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.905634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.905749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.905780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 
00:30:04.552 [2024-12-05 20:49:57.905968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.905999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.906260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.906292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.906474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.906506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.906642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.906672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.906778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.906809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 
00:30:04.552 [2024-12-05 20:49:57.906932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.906962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.907089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.907121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.907390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.907422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.907620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.907652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.907858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.907890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 
00:30:04.552 [2024-12-05 20:49:57.908097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.908129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.908324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.908356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.908634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.908664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.908802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.552 [2024-12-05 20:49:57.908834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.552 qpair failed and we were unable to recover it. 00:30:04.552 [2024-12-05 20:49:57.909002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.909034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 
00:30:04.553 [2024-12-05 20:49:57.909340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.909371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.909559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.909590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.909834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.909866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.910044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.910083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.910329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.910360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 
00:30:04.553 [2024-12-05 20:49:57.910489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.910520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.910644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.910681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.910894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.910925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.911190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.911223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.911413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.911444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 
00:30:04.553 [2024-12-05 20:49:57.911549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.911580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.911790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.911822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.912004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.912034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.912266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.912298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 00:30:04.553 [2024-12-05 20:49:57.912486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.553 [2024-12-05 20:49:57.912517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.553 qpair failed and we were unable to recover it. 
00:30:04.555 [2024-12-05 20:49:57.935507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.935577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it.
00:30:04.555 [2024-12-05 20:49:57.936086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.936121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.936425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.936457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.936650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.936681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.936801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.936832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.937016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.937047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 
00:30:04.555 [2024-12-05 20:49:57.937178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.937209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.937401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.937432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.937684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.937716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.937919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.937950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.938140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.938173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 
00:30:04.555 [2024-12-05 20:49:57.938356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.938388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.938625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.938667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.938860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.938891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.939018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.939049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.939180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.939211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 
00:30:04.555 [2024-12-05 20:49:57.939319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.939351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.939631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.555 [2024-12-05 20:49:57.939661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.555 qpair failed and we were unable to recover it. 00:30:04.555 [2024-12-05 20:49:57.939782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.939815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 00:30:04.556 [2024-12-05 20:49:57.940000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.940031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 00:30:04.556 [2024-12-05 20:49:57.940260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.940292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 
00:30:04.556 [2024-12-05 20:49:57.940398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.940429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 00:30:04.556 [2024-12-05 20:49:57.940645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.940677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 00:30:04.556 [2024-12-05 20:49:57.940935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.940966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 00:30:04.556 [2024-12-05 20:49:57.941213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.941246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 00:30:04.556 [2024-12-05 20:49:57.941453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.941486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 
00:30:04.556 [2024-12-05 20:49:57.941687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.941718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 00:30:04.556 [2024-12-05 20:49:57.941959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.941990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 00:30:04.556 [2024-12-05 20:49:57.942187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.942219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 00:30:04.556 [2024-12-05 20:49:57.942505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.556 [2024-12-05 20:49:57.942537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.556 qpair failed and we were unable to recover it. 00:30:04.831 [2024-12-05 20:49:57.942705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.831 [2024-12-05 20:49:57.942738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.831 qpair failed and we were unable to recover it. 
00:30:04.831 [2024-12-05 20:49:57.942949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.831 [2024-12-05 20:49:57.942982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.831 qpair failed and we were unable to recover it. 00:30:04.831 [2024-12-05 20:49:57.943159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.831 [2024-12-05 20:49:57.943192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.831 qpair failed and we were unable to recover it. 00:30:04.831 [2024-12-05 20:49:57.943305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.831 [2024-12-05 20:49:57.943337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.831 qpair failed and we were unable to recover it. 00:30:04.831 [2024-12-05 20:49:57.943465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.831 [2024-12-05 20:49:57.943496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.831 qpair failed and we were unable to recover it. 00:30:04.831 [2024-12-05 20:49:57.943634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.831 [2024-12-05 20:49:57.943666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.831 qpair failed and we were unable to recover it. 
00:30:04.831 [2024-12-05 20:49:57.943792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.831 [2024-12-05 20:49:57.943823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.831 qpair failed and we were unable to recover it. 00:30:04.831 [2024-12-05 20:49:57.943944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.831 [2024-12-05 20:49:57.943975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.831 qpair failed and we were unable to recover it. 00:30:04.831 [2024-12-05 20:49:57.944091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.831 [2024-12-05 20:49:57.944122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.831 qpair failed and we were unable to recover it. 00:30:04.831 [2024-12-05 20:49:57.944297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.831 [2024-12-05 20:49:57.944368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.944669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.944705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 
00:30:04.832 [2024-12-05 20:49:57.944887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.944919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.945123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.945160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.945386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.945418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.945556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.945589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.945766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.945797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 
00:30:04.832 [2024-12-05 20:49:57.945968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.945998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.946263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.946296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.946595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.946626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.946811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.946843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.947052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.947093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 
00:30:04.832 [2024-12-05 20:49:57.947282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.947314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.947427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.947458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.947663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.947696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.947818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.947848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.947968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.948000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 
00:30:04.832 [2024-12-05 20:49:57.948222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.948254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.948384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.948416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.948670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.948701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.948819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.948850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.949095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.949127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 
00:30:04.832 [2024-12-05 20:49:57.949329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.949361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.949538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.949569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.949818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.949848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.949969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.950000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.950237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.950270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 
00:30:04.832 [2024-12-05 20:49:57.950517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.950553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.950661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.950693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.950912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.950944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.951050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.951093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.951276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.951308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 
00:30:04.832 [2024-12-05 20:49:57.951496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.951528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.832 qpair failed and we were unable to recover it. 00:30:04.832 [2024-12-05 20:49:57.951661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.832 [2024-12-05 20:49:57.951692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.951878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.951908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.952184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.952217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.952431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.952462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 
00:30:04.833 [2024-12-05 20:49:57.952574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.952605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.952780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.952812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.952983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.953014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.953160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.953191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.953383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.953413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 
00:30:04.833 [2024-12-05 20:49:57.953594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.953624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.953827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.953858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.954035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.954075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.954320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.954351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.954487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.954518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 
00:30:04.833 [2024-12-05 20:49:57.954709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.954739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.954934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.954965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.955144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.955175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.955363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.955394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.955500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.955531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 
00:30:04.833 [2024-12-05 20:49:57.955642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.955674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.955946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.955978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.956261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.956298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.956599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.956631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 00:30:04.833 [2024-12-05 20:49:57.956844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.833 [2024-12-05 20:49:57.956875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.833 qpair failed and we were unable to recover it. 
00:30:04.833 [2024-12-05 20:49:57.957141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.833 [2024-12-05 20:49:57.957172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.833 qpair failed and we were unable to recover it.
00:30:04.833 [2024-12-05 20:49:57.957281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.833 [2024-12-05 20:49:57.957312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.833 qpair failed and we were unable to recover it.
00:30:04.833 [2024-12-05 20:49:57.957507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.833 [2024-12-05 20:49:57.957539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.833 qpair failed and we were unable to recover it.
00:30:04.833 [2024-12-05 20:49:57.957660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.833 [2024-12-05 20:49:57.957690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.833 qpair failed and we were unable to recover it.
00:30:04.833 [2024-12-05 20:49:57.957883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.833 [2024-12-05 20:49:57.957914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.833 qpair failed and we were unable to recover it.
00:30:04.833 [2024-12-05 20:49:57.958132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.833 [2024-12-05 20:49:57.958164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.833 qpair failed and we were unable to recover it.
00:30:04.833 [2024-12-05 20:49:57.958353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.833 [2024-12-05 20:49:57.958383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.833 qpair failed and we were unable to recover it.
00:30:04.833 [2024-12-05 20:49:57.958486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.833 [2024-12-05 20:49:57.958516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.833 qpair failed and we were unable to recover it.
00:30:04.833 [2024-12-05 20:49:57.958711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.833 [2024-12-05 20:49:57.958743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.833 qpair failed and we were unable to recover it.
00:30:04.833 [2024-12-05 20:49:57.958847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.833 [2024-12-05 20:49:57.958878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.959109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.959142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.959358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.959390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.959600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.959632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.959807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.959839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.959955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.959986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.960229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.960260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.960430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.960460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.960659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.960690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.960937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.960969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.961083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.961114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.961292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.961323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.961586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.961617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.961735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.961766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.961949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.961981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.962250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.962289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.962520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.962552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.962750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.962781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.962982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.963013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.963296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.963329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.963454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.963484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.963726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.963757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.963929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.963960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.964134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.964168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.964276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.964307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.964526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.964557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.964773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.964805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.964978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.965009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.965195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.965226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.965484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.965555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.965773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.965808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.966099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.966135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.966333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.966365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.966537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.834 [2024-12-05 20:49:57.966568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.834 qpair failed and we were unable to recover it.
00:30:04.834 [2024-12-05 20:49:57.966813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.966844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.967072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.967104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.967219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.967250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.967442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.967474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.967603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.967633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.967768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.967799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.968000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.968031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.968192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.968224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.968407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.968448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.968650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.968683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.969015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.969045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.969325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.969356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.969595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.969627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.969842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.969873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.970044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.970085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.970220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.970251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.970371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.970402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.970610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.970640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.970827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.970859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.971100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.971132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.971325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.971356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.971543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.971574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.971769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.971800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.971992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.972023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.972209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.972241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.972431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.972463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.972587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.972619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.972823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.972853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.972965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.972996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.973265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.835 [2024-12-05 20:49:57.973297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.835 qpair failed and we were unable to recover it.
00:30:04.835 [2024-12-05 20:49:57.973571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.973602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.973785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.973816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.973931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.973962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.974080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.974112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.974356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.974387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.974576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.974608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.974794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.974826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.974945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.974976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.975101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.975133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.975377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.975409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.975735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.975767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.975938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.975970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.976159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.976191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.976370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.976402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.976615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.976646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.976902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.976934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.977037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.977078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.977273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.977304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.977435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.977473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.977750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.977782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.977899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.977930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.978169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.836 [2024-12-05 20:49:57.978201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.836 qpair failed and we were unable to recover it.
00:30:04.836 [2024-12-05 20:49:57.978422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.836 [2024-12-05 20:49:57.978453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.836 qpair failed and we were unable to recover it. 00:30:04.836 [2024-12-05 20:49:57.978739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.836 [2024-12-05 20:49:57.978769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.836 qpair failed and we were unable to recover it. 00:30:04.836 [2024-12-05 20:49:57.978947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.836 [2024-12-05 20:49:57.978977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.836 qpair failed and we were unable to recover it. 00:30:04.836 [2024-12-05 20:49:57.979159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.836 [2024-12-05 20:49:57.979191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.836 qpair failed and we were unable to recover it. 00:30:04.836 [2024-12-05 20:49:57.979361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.836 [2024-12-05 20:49:57.979392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.836 qpair failed and we were unable to recover it. 
00:30:04.836 [2024-12-05 20:49:57.979513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.836 [2024-12-05 20:49:57.979544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.836 qpair failed and we were unable to recover it. 00:30:04.836 [2024-12-05 20:49:57.979717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.836 [2024-12-05 20:49:57.979748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.836 qpair failed and we were unable to recover it. 00:30:04.836 [2024-12-05 20:49:57.979962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.836 [2024-12-05 20:49:57.979993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.836 qpair failed and we were unable to recover it. 00:30:04.836 [2024-12-05 20:49:57.980175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.836 [2024-12-05 20:49:57.980207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.836 qpair failed and we were unable to recover it. 00:30:04.836 [2024-12-05 20:49:57.980457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.980488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 
00:30:04.837 [2024-12-05 20:49:57.980666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.980698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.980948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.980980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.981111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.981142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.981245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.981276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.981457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.981488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 
00:30:04.837 [2024-12-05 20:49:57.981736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.981766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.981887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.981918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.982089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.982121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.982328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.982358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.982628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.982659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 
00:30:04.837 [2024-12-05 20:49:57.982785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.982816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.983078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.983112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.983287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.983317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.983572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.983604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.983849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.983879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 
00:30:04.837 [2024-12-05 20:49:57.984050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.984093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.984293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.984325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.984567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.984598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.984770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.984801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.984975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.985006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 
00:30:04.837 [2024-12-05 20:49:57.985127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.985159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.985288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.985319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.985533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.985563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.985679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.985710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.985893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.985925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 
00:30:04.837 [2024-12-05 20:49:57.986124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.837 [2024-12-05 20:49:57.986156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.837 qpair failed and we were unable to recover it. 00:30:04.837 [2024-12-05 20:49:57.986328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.986365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.986548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.986579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.986773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.986804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.986928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.986960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 
00:30:04.838 [2024-12-05 20:49:57.987086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.987119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.987336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.987367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.987580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.987612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.987885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.987916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.988093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.988124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 
00:30:04.838 [2024-12-05 20:49:57.988369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.988400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.988585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.988617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.988787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.988819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.989088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.989120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.989262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.989292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 
00:30:04.838 [2024-12-05 20:49:57.989481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.989512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.989785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.989818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.989991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.990022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.990207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.990240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.990423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.990454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 
00:30:04.838 [2024-12-05 20:49:57.990694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.990726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.990843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.990874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.991046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.991090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.991420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.991450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.991640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.991671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 
00:30:04.838 [2024-12-05 20:49:57.991795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.991826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.991956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.991988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.992290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.992323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.992501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.992532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.992716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.992747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 
00:30:04.838 [2024-12-05 20:49:57.993019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.993051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.993237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.993269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.993399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.838 [2024-12-05 20:49:57.993431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.838 qpair failed and we were unable to recover it. 00:30:04.838 [2024-12-05 20:49:57.993566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.993597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.993712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.993743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 
00:30:04.839 [2024-12-05 20:49:57.993929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.993960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.994085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.994117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.994306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.994337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.994454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.994484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.994678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.994709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 
00:30:04.839 [2024-12-05 20:49:57.994882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.994913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.995129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.995168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.995272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.995303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.995472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.995503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.995784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.995816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 
00:30:04.839 [2024-12-05 20:49:57.996005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.996036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.996174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.996207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.996407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.996438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.996554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.996586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 00:30:04.839 [2024-12-05 20:49:57.996695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.839 [2024-12-05 20:49:57.996726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.839 qpair failed and we were unable to recover it. 
00:30:04.839 [2024-12-05 20:49:57.996923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.996955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.997148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.997181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.997303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.997333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.997576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.997608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.997744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.997775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.997901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.997933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.998169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.998202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.998313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.998344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.998546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.998577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.998681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.998713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.998897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.998928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.999057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.999097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.999221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.999252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.999500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.839 [2024-12-05 20:49:57.999530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.839 qpair failed and we were unable to recover it.
00:30:04.839 [2024-12-05 20:49:57.999707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:57.999737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:57.999868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:57.999900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.000012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.000044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.000297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.000328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.000556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.000626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.000851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.000887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.001014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.001047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.001249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.001281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.001493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.001524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.001701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.001732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.001856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.001887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.002072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.002106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.002295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.002326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.002451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.002483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.002594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.002625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.002820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.002852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.003083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.003128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.003424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.003466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.003597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.003628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.003834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.003865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.004055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.004100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.004284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.004315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.004498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.004529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.004804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.004834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.005028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.005070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.005345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.005376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.005494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.005525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.005695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.005727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.005850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.005881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.005996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.006029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.006236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.006269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.840 [2024-12-05 20:49:58.006454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.840 [2024-12-05 20:49:58.006486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.840 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.006735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.006766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.006960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.006991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.007176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.007209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.007324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.007355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.007569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.007600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.007731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.007763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.007938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.007969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.008161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.008193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.008324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.008356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.008477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.008509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.008687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.008717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.008975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.009006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.009145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.009193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.009308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.009339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.009541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.009572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.009791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.009822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.010075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.010108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.010218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.010249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.010375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.010407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.010629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.010661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.010928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.010959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.011154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.011187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.011301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.011333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.011448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.011480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.011668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.011700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.011944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.011976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.012227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.012260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.012480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.012510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.012699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.012732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.012914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.012945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.013169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.013200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.841 [2024-12-05 20:49:58.013474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.841 [2024-12-05 20:49:58.013506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.841 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.013684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.013715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.013892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.013923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.014035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.014089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.014287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.014319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.014540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.014571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.014775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.014807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.015054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.015097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.015286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.015318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.015429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.015461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.015686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.015717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.015922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.015955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.016079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.016112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.016214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.016244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.016441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.016473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.016648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.016680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.016962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.016995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.017270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.017305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.017520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.017551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.017726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.017758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.018002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.018033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.018163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.018201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.018457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.018488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.018707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.018739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.019017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.019049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.019278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.019309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.019522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.019554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.019727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.019758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.019873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.842 [2024-12-05 20:49:58.019904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.842 qpair failed and we were unable to recover it.
00:30:04.842 [2024-12-05 20:49:58.020028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.843 [2024-12-05 20:49:58.020069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.843 qpair failed and we were unable to recover it.
00:30:04.843 [2024-12-05 20:49:58.020257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.843 [2024-12-05 20:49:58.020290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.843 qpair failed and we were unable to recover it.
00:30:04.843 [2024-12-05 20:49:58.020504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.843 [2024-12-05 20:49:58.020535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.843 qpair failed and we were unable to recover it.
00:30:04.843 [2024-12-05 20:49:58.020766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.843 [2024-12-05 20:49:58.020798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.843 qpair failed and we were unable to recover it.
00:30:04.843 [2024-12-05 20:49:58.021099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.843 [2024-12-05 20:49:58.021133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.843 qpair failed and we were unable to recover it.
00:30:04.843 [2024-12-05 20:49:58.021322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.843 [2024-12-05 20:49:58.021355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.843 qpair failed and we were unable to recover it.
00:30:04.843 [2024-12-05 20:49:58.021601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.843 [2024-12-05 20:49:58.021633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.843 qpair failed and we were unable to recover it.
00:30:04.843 [2024-12-05 20:49:58.021825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.843 [2024-12-05 20:49:58.021857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.843 qpair failed and we were unable to recover it.
00:30:04.843 [2024-12-05 20:49:58.021961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.843 [2024-12-05 20:49:58.021992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.843 qpair failed and we were unable to recover it.
00:30:04.843 [2024-12-05 20:49:58.022178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.022211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.022406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.022437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.022642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.022673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.022880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.022911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.023125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.023157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 
00:30:04.843 [2024-12-05 20:49:58.023357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.023388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.023574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.023606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.023795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.023826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.024038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.024080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.024273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.024305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 
00:30:04.843 [2024-12-05 20:49:58.024425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.024457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.024647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.024679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.024988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.025019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.025164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.025196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.025373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.025403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 
00:30:04.843 [2024-12-05 20:49:58.025509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.025539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.025673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.025705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.025895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.025926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.026112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.026143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.026401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.026433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 
00:30:04.843 [2024-12-05 20:49:58.026603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.026635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.026873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.026903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.843 [2024-12-05 20:49:58.027208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.843 [2024-12-05 20:49:58.027240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.843 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.027431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.027469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.027644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.027676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 
00:30:04.844 [2024-12-05 20:49:58.027863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.027894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.028077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.028110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.028241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.028272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.028389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.028421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.028611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.028642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 
00:30:04.844 [2024-12-05 20:49:58.028861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.028892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.029086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.029118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.029240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.029272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.029384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.029416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.029519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.029549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 
00:30:04.844 [2024-12-05 20:49:58.029680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.029711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.029854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.029885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.030019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.030050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.030340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.030371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.030490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.030521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 
00:30:04.844 [2024-12-05 20:49:58.030703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.030735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.030997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.031028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.031210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.031243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.031430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.031461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.031645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.031676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 
00:30:04.844 [2024-12-05 20:49:58.031949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.031980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.032183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.032215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.032391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.032422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.032543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.032574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.032682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.032713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 
00:30:04.844 [2024-12-05 20:49:58.032904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.032936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.033197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.033230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.033357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.033389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.033604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.033635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.033809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.033841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 
00:30:04.844 [2024-12-05 20:49:58.034029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.034069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.844 qpair failed and we were unable to recover it. 00:30:04.844 [2024-12-05 20:49:58.034245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.844 [2024-12-05 20:49:58.034276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.034400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.034432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.034697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.034728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.034904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.034936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 
00:30:04.845 [2024-12-05 20:49:58.035036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.035077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.035319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.035350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.035463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.035495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.035792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.035830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.035949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.035980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 
00:30:04.845 [2024-12-05 20:49:58.036265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.036297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.036431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.036463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.036637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.036669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.036846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.036879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.036984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.037017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 
00:30:04.845 [2024-12-05 20:49:58.037262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.037296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.037407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.037439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.037623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.037655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.037876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.037909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 00:30:04.845 [2024-12-05 20:49:58.038186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.845 [2024-12-05 20:49:58.038218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.845 qpair failed and we were unable to recover it. 
00:30:04.845 [2024-12-05 20:49:58.038409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.845 [2024-12-05 20:49:58.038439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.845 qpair failed and we were unable to recover it.
[… identical connect()/qpair-failure triplets for tqpair=0x7f8c04000b90 (addr=10.0.0.2, port=4420) repeat through 2024-12-05 20:49:58.061894 …]
00:30:04.849 [2024-12-05 20:49:58.062096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.062128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.062301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.062333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.062604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.062635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.062749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.062781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.062964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.062995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 
00:30:04.849 [2024-12-05 20:49:58.063198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.063229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.063349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.063381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.063577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.063609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.063718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.063749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.063931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.063964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 
00:30:04.849 [2024-12-05 20:49:58.064140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.064171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.064373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.064404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.064642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.064673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.064947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.064978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.065223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.065256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 
00:30:04.849 [2024-12-05 20:49:58.065379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.065412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.065532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.065564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.065700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.065731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.065949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.065981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.066097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.066128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 
00:30:04.849 [2024-12-05 20:49:58.066270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.066303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.066423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.066454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.066650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.066681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.066810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.849 [2024-12-05 20:49:58.066842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.849 qpair failed and we were unable to recover it. 00:30:04.849 [2024-12-05 20:49:58.067114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.067146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 
00:30:04.850 [2024-12-05 20:49:58.067337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.067367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.067543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.067575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.067713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.067744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.067926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.067957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.068144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.068176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 
00:30:04.850 [2024-12-05 20:49:58.068282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.068314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.068441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.068472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.068714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.068746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.068928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.068965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.069079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.069112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 
00:30:04.850 [2024-12-05 20:49:58.069234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.069265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.069455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.069487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.069766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.069797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.069967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.069998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.070284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.070316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 
00:30:04.850 [2024-12-05 20:49:58.070533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.070564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.070695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.070728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.070853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.070885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.071000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.071032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.071158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.071189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 
00:30:04.850 [2024-12-05 20:49:58.071291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.071323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.071553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.071585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.071855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.071887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.072011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.072043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.072236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.072269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 
00:30:04.850 [2024-12-05 20:49:58.072442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.072473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.072746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.072778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.073078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.073111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.850 qpair failed and we were unable to recover it. 00:30:04.850 [2024-12-05 20:49:58.073220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-12-05 20:49:58.073251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.073437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.073468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 
00:30:04.851 [2024-12-05 20:49:58.073712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.073743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.073860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.073892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.074012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.074043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.074160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.074192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.074396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.074428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 
00:30:04.851 [2024-12-05 20:49:58.074681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.074714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.074829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.074861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.075046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.075088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.075295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.075327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.075509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.075540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 
00:30:04.851 [2024-12-05 20:49:58.075653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.075685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.075909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.075941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.076072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.076105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.076305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.076337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.076542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.076574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 
00:30:04.851 [2024-12-05 20:49:58.076755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.076787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.076914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.076946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.077160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.077193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.077375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.077412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 00:30:04.851 [2024-12-05 20:49:58.077604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-12-05 20:49:58.077636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.851 qpair failed and we were unable to recover it. 
00:30:04.851 [2024-12-05 20:49:58.077748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.851 [2024-12-05 20:49:58.077780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.851 qpair failed and we were unable to recover it.
[... the three records above repeat ~100 more times between 20:49:58.077 and 20:49:58.099: every connect() attempt to 10.0.0.2:4420 on tqpair=0x7f8c04000b90 is refused with errno = 111 ...]
00:30:04.854 [2024-12-05 20:49:58.099802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.854 [2024-12-05 20:49:58.099869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.854 qpair failed and we were unable to recover it. 
00:30:04.857 [identical posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x249f590 repeated 94 more times between 20:49:58.100003 and 20:49:58.118347] 
00:30:04.857 [2024-12-05 20:49:58.118501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.118572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.118777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.118814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.118987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.119019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.119154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.119188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.119385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.119417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 
00:30:04.857 [2024-12-05 20:49:58.119602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.119634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.119832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.119865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.120116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.120150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.120449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.120481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.120602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.120633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 
00:30:04.857 [2024-12-05 20:49:58.120889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.120921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.121028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.121068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.121198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.121229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.121362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.121404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.121521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.121553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 
00:30:04.857 [2024-12-05 20:49:58.121833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.121865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.122037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.122078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.122206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.122238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.122415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.122447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.122620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.122652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 
00:30:04.857 [2024-12-05 20:49:58.122833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.122864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.123044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.123084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.123281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.123313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.123507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.123539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.857 qpair failed and we were unable to recover it. 00:30:04.857 [2024-12-05 20:49:58.123785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.857 [2024-12-05 20:49:58.123816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 
00:30:04.858 [2024-12-05 20:49:58.123998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.124030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.124170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.124202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.124327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.124359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.124547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.124578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.124760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.124792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 
00:30:04.858 [2024-12-05 20:49:58.124912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.124944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.125073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.125106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.125218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.125250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.125368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.125401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.125649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.125680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 
00:30:04.858 [2024-12-05 20:49:58.125825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.125857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.126097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.126131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.126259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.126290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.126409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.126440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.126683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.126715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 
00:30:04.858 [2024-12-05 20:49:58.126884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.126955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.127090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.127128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.127342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.127375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.127552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.127583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.127759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.127790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 
00:30:04.858 [2024-12-05 20:49:58.127923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.127954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.128072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.128105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.128348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.128380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.128512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.128544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.128726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.128758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 
00:30:04.858 [2024-12-05 20:49:58.128862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.128893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.129018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.129050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.129255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.129287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.129456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.129497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.129622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.129653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 
00:30:04.858 [2024-12-05 20:49:58.129841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.129872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.130053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.130097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.130293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.130324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.130581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.130612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.130793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.130824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 
00:30:04.858 [2024-12-05 20:49:58.130939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.130970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.131113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.131145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.131332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.131363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.131622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.131654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.131846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.131878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 
00:30:04.858 [2024-12-05 20:49:58.132026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.132067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.132245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.132277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.132400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.858 [2024-12-05 20:49:58.132431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.858 qpair failed and we were unable to recover it. 00:30:04.858 [2024-12-05 20:49:58.132643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.132675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.132911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.132941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 
00:30:04.859 [2024-12-05 20:49:58.133083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.133115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.133217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.133248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.133424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.133454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.133638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.133669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.133957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.133988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 
00:30:04.859 [2024-12-05 20:49:58.134181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.134213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.134338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.134369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.134490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.134522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.134646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.134677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.134785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.134817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 
00:30:04.859 [2024-12-05 20:49:58.135080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.135151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.135364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.135400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.135576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.135608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.135735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.135766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 00:30:04.859 [2024-12-05 20:49:58.135877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.859 [2024-12-05 20:49:58.135909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:04.859 qpair failed and we were unable to recover it. 
00:30:04.859 [2024-12-05 20:49:58.136024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.859 [2024-12-05 20:49:58.136055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:04.859 qpair failed and we were unable to recover it.
00:30:04.859 [... the same connect()/qpair-failure sequence repeats 34 more times for tqpair=0x7f8c10000b90, timestamps 20:49:58.136261 through 20:49:58.142328 ...]
00:30:04.860 [2024-12-05 20:49:58.142538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.860 [2024-12-05 20:49:58.142610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.860 qpair failed and we were unable to recover it.
00:30:04.861 [... the same connect()/qpair-failure sequence repeats 79 more times for tqpair=0x249f590, timestamps 20:49:58.142827 through 20:49:58.158027 ...]
00:30:04.861 [2024-12-05 20:49:58.158156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.861 [2024-12-05 20:49:58.158190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.861 qpair failed and we were unable to recover it. 00:30:04.861 [2024-12-05 20:49:58.158377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.158408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.158515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.158547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.158677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.158710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.158910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.158941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 
00:30:04.862 [2024-12-05 20:49:58.159054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.159094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.159213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.159245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.159465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.159496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.159760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.159791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.159912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.159943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 
00:30:04.862 [2024-12-05 20:49:58.160071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.160104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.160282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.160313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.160457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.160489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.160756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.160788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.160968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.160999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 
00:30:04.862 [2024-12-05 20:49:58.161138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.161170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.161416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.161447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.161627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.161659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.161836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.161868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.162140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.162174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 
00:30:04.862 [2024-12-05 20:49:58.162416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.162448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.162558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.162589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.162717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.162748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.163002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.163034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.163286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.163319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 
00:30:04.862 [2024-12-05 20:49:58.163456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.163488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.163754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.163785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.163955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.163986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.164176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.164209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.164395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.164427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 
00:30:04.862 [2024-12-05 20:49:58.164613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.164645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.164846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.164878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.164983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.165013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.165127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.165160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.165329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.165361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 
00:30:04.862 [2024-12-05 20:49:58.165484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.165515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.165704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.165735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.165849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.165880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.166129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.166162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.166430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.166463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 
00:30:04.862 [2024-12-05 20:49:58.166646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.166678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.166860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.166892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.167163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.167207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.167388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.167419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.167532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.167564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 
00:30:04.862 [2024-12-05 20:49:58.167744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.167775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.168020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.168052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.862 [2024-12-05 20:49:58.168167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.862 [2024-12-05 20:49:58.168198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.862 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.168456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.168487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.168697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.168728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 
00:30:04.863 [2024-12-05 20:49:58.168997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.169030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.169163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.169194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.169313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.169344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.169530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.169561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.169689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.169720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 
00:30:04.863 [2024-12-05 20:49:58.169937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.169969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.170199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.170233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.170359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.170390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.170604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.170635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.170818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.170850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 
00:30:04.863 [2024-12-05 20:49:58.171045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.171085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.171197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.171228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.171412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.171443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.171569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.171600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.171707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.171738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 
00:30:04.863 [2024-12-05 20:49:58.171866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.171897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.172113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.172145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.172257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.172288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.172412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.172443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.172619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.172656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 
00:30:04.863 [2024-12-05 20:49:58.172788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.172819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.172990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.173021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.173238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.173270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.173461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.173493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 00:30:04.863 [2024-12-05 20:49:58.173605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.863 [2024-12-05 20:49:58.173636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:04.863 qpair failed and we were unable to recover it. 
00:30:04.863 [2024-12-05 20:49:58.173760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.173792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.173987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.174018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.174236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.174269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.174509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.174539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.174710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.174742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.175023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.175053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.175332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.175363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.175483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.175514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.175640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.175672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.175840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.175871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.176002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.176033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.176169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.176201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.176380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.176412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.176620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.176650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.176773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.176805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.176933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.176964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.177080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.177113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.177369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.177400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.177655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.177687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.177861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.177892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.178004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.863 [2024-12-05 20:49:58.178036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.863 qpair failed and we were unable to recover it.
00:30:04.863 [2024-12-05 20:49:58.178152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.178189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.178313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.178344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.178611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.178642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.178816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.178847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.178961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.179005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.179138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.179170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.179361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.179393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.179518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.179550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.179728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.179759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.179867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.179898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.180030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.180072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.180253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.180284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.180391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.180423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.180611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.180642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.180833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.180864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.180988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.181020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.181151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.181184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.181306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.181337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.181531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.181563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.181737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.181768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.181871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.181902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.182019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.182050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.182306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.182338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.182449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.182481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.182597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.182628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.182751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.182781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.182885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.182916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.183112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.183145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.183322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.183353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.183472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.183504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.183617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.183648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.183761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.183792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.183971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.184002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.184215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.184248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.184362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.184393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.184504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.184534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.184659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.184691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.184869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.184901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.185013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.185044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.185160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.185192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.185305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.185336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.185572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.185644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.185786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.185822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.186000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.186032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.186192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.186233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.186420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.186452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.186558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.186589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.186773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.186804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.186913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.186945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.187088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.187121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.864 [2024-12-05 20:49:58.187246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.864 [2024-12-05 20:49:58.187278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.864 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.187463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.187493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.187679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.187710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.187831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.187861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.187970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.188011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.188215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.188247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.188370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.188402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.188528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.188560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.188749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.188781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.188965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.188997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.189113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.189146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.189263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.189294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.189415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.189446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.189623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.189655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.189825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.189856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.189976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.190007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.190212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.190245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.190360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.190391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.190516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.190549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.190664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.190694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.190875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.190905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.191026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.191069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.191252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.191284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.191395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.191426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.191534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.191567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.191693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.191724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.191826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.191858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.192041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.192084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.192203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.192234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.192348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.192380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.192571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.192603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.192723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.192760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.192931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.192962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.193156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.193189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.193294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.193326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.193450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.193481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.193584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.193615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.193734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.193765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.193947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.193978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.194099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.194131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.194304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.194336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.194517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.194549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.194733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.194763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.194878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.865 [2024-12-05 20:49:58.194910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:04.865 qpair failed and we were unable to recover it.
00:30:04.865 [2024-12-05 20:49:58.195029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.195071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.195255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.195288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.195466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.195497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.195628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.195660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.195793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.195825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 
00:30:04.866 [2024-12-05 20:49:58.196002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.196033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.196215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.196247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.196428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.196460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.196577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.196608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.196783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.196814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 
00:30:04.866 [2024-12-05 20:49:58.197035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.197076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.197195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.197226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.197346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.197378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.197490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.197521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.197735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.197767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 
00:30:04.866 [2024-12-05 20:49:58.198092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.198125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.198314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.198345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.198530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.198561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.198787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.198818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.198944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.198976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 
00:30:04.866 [2024-12-05 20:49:58.199161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.199194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.199297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.199329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.199454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.199485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.199752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.199783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.199885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.199916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 
00:30:04.866 [2024-12-05 20:49:58.200107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.200140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.200244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.200275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.200585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.200622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.200745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.200777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.200887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.200918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 
00:30:04.866 [2024-12-05 20:49:58.201164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.201197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.201380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.201411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.201525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.201556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.201689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.201721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.201897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.201928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 
00:30:04.866 [2024-12-05 20:49:58.202199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.202232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.202429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.202460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.202590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.202621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.202750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.202782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.202979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.203011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 
00:30:04.866 [2024-12-05 20:49:58.203137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.203169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.203357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.203390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.203570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.203601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.203717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.203747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.203871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.203902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 
00:30:04.866 [2024-12-05 20:49:58.204023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.204054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.204209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.204241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.204430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.204461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.204571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.204602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.204812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.204844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 
00:30:04.866 [2024-12-05 20:49:58.205026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-12-05 20:49:58.205056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.866 qpair failed and we were unable to recover it. 00:30:04.866 [2024-12-05 20:49:58.205278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.205310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.205424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.205456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.205777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.205808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.206053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.206098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 
00:30:04.867 [2024-12-05 20:49:58.206222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.206254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.206448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.206479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.206683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.206715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.206839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.206870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.206981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.207013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 
00:30:04.867 [2024-12-05 20:49:58.207234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.207266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.207477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.207509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.207726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.207758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.207937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.207969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.208081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.208114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 
00:30:04.867 [2024-12-05 20:49:58.208378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.208409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.208533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.208563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.208684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.208722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.208841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.208873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.209042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.209084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 
00:30:04.867 [2024-12-05 20:49:58.209261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.209293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.209469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.209501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.209676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.209708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.209948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.209978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.210170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.210202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 
00:30:04.867 [2024-12-05 20:49:58.210415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.210446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.210632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.210664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.210922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.210954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.211081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.211114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 00:30:04.867 [2024-12-05 20:49:58.211296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-12-05 20:49:58.211328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.867 qpair failed and we were unable to recover it. 
00:30:04.869 [2024-12-05 20:49:58.234311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.869 [2024-12-05 20:49:58.234342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.869 qpair failed and we were unable to recover it. 00:30:04.869 [2024-12-05 20:49:58.234550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.869 [2024-12-05 20:49:58.234582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.869 qpair failed and we were unable to recover it. 00:30:04.869 [2024-12-05 20:49:58.234758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.869 [2024-12-05 20:49:58.234788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.869 qpair failed and we were unable to recover it. 00:30:04.869 [2024-12-05 20:49:58.234894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.869 [2024-12-05 20:49:58.234926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.869 qpair failed and we were unable to recover it. 00:30:04.869 [2024-12-05 20:49:58.235051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.869 [2024-12-05 20:49:58.235093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.869 qpair failed and we were unable to recover it. 
00:30:04.869 [2024-12-05 20:49:58.235288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.869 [2024-12-05 20:49:58.235319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.869 qpair failed and we were unable to recover it. 00:30:04.869 [2024-12-05 20:49:58.235491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.869 [2024-12-05 20:49:58.235523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.869 qpair failed and we were unable to recover it. 00:30:04.869 [2024-12-05 20:49:58.235638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.869 [2024-12-05 20:49:58.235670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.869 qpair failed and we were unable to recover it. 00:30:04.869 [2024-12-05 20:49:58.235854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.869 [2024-12-05 20:49:58.235885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.869 qpair failed and we were unable to recover it. 00:30:04.869 [2024-12-05 20:49:58.236071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.869 [2024-12-05 20:49:58.236104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.869 qpair failed and we were unable to recover it. 
00:30:04.869 [2024-12-05 20:49:58.236225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.236257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.236393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.236425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.236530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.236561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.236661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.236692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.236792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.236822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 
00:30:04.870 [2024-12-05 20:49:58.236998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.237030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.237299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.237369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.237566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.237601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.237715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.237747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.237871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.237903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 
00:30:04.870 [2024-12-05 20:49:58.238130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.238163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.238440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.238472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.238593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.238623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.238815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.238847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.238963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.238994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 
00:30:04.870 [2024-12-05 20:49:58.239230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.239263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.239370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.239402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.239521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.239554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.239809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.239839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.240028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.240068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 
00:30:04.870 [2024-12-05 20:49:58.240268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.240300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.240426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.240458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.240574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.240605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.240726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.240757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.240882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.240914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 
00:30:04.870 [2024-12-05 20:49:58.241112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.241144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.241348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.241381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.241585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.241622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.241755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.241787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.241990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.242021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 
00:30:04.870 [2024-12-05 20:49:58.242156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.242190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.242307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.242339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.242447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.242478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.242666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.242698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.242833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.242865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 
00:30:04.870 [2024-12-05 20:49:58.243071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.243104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.243223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.243254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.243436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.243468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.243653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.243685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.243861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.243893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 
00:30:04.870 [2024-12-05 20:49:58.244013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.244044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.244178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.244211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.244392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.244423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.244538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.244570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.244668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.244698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 
00:30:04.870 [2024-12-05 20:49:58.244812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.244844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.245038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.245083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.245189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.245221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.245338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.245369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.245541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.245574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 
00:30:04.870 [2024-12-05 20:49:58.245728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.245758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.245935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.245966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.246082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.246116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.870 [2024-12-05 20:49:58.246221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.870 [2024-12-05 20:49:58.246252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.870 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.246493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.246525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 
00:30:04.871 [2024-12-05 20:49:58.246736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.246767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.246881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.246913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.247037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.247077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.247291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.247323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.247509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.247540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 
00:30:04.871 [2024-12-05 20:49:58.247676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.247707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.247885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.247916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.248164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.248195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.248324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.248356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.248533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.248563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 
00:30:04.871 [2024-12-05 20:49:58.248681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.248712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.248836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.248868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.249054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.249100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.249212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.249244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.249569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.249600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 
00:30:04.871 [2024-12-05 20:49:58.249731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.249762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.249939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.249970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.250140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.250174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.250345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.250376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 00:30:04.871 [2024-12-05 20:49:58.250659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.871 [2024-12-05 20:49:58.250690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:04.871 qpair failed and we were unable to recover it. 
00:30:04.871 [2024-12-05 20:49:58.250806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.250837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.250955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.250986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.251098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.251130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.251400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.251432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.251538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.251568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.251842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.251874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.252128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.252161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.252311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.252342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.252456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.252486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.252606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.252638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.252778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.252809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.252921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.252953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.253081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.253114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.253301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.253332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:04.871 [2024-12-05 20:49:58.253449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.871 [2024-12-05 20:49:58.253480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:04.871 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.253660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.253693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.253868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.253901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.254033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.254079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.254208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.254239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.254505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.254575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.254793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.254830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.255024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.255056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.255190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.255223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.255463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.255494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.255678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.255710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.255826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.255856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.255989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.256020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.256153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.256186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.256376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.256407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.256542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.256573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.256675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.256706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.256813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.256844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.257026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.257080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.257192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.257223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.257337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.257369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.148 [2024-12-05 20:49:58.257648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-05 20:49:58.257679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.148 qpair failed and we were unable to recover it.
00:30:05.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 535852 Killed "${NVMF_APP[@]}" "$@"
00:30:05.149 [2024-12-05 20:49:58.257790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.257823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.258016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.258047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.258188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.258219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.258337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.258367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:05.149 [2024-12-05 20:49:58.258551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.258584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.258710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.258741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:05.149 [2024-12-05 20:49:58.258864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.258897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.259018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.259050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:05.149 [2024-12-05 20:49:58.259262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.259296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.259475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:05.149 [2024-12-05 20:49:58.259506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.259644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.259675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.149 [2024-12-05 20:49:58.259791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.259823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.260017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.260049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.260234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.260265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.260486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.260518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.260638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.260668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.260855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.260888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.261022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.261054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.261239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.261271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.261459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.261490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.261603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.261642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.261858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.261890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.262007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.262039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.262189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.262221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.262420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.262452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.262585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.262616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.262790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.262822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.262945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.262975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.263239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.263272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.263398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.263429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.263562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.263592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.263707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.263738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.263918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.149 [2024-12-05 20:49:58.263947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.149 qpair failed and we were unable to recover it.
00:30:05.149 [2024-12-05 20:49:58.264130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.264160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.264409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.264442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.264571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.264602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.264789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.264820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.264996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.265025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.265153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.265184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.265318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.265348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.265458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.265488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.265676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.265706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.265825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.265855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.265973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.266003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.266203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.266235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.266359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.266391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=536547
00:30:05.150 [2024-12-05 20:49:58.266504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.266536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.266687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.266718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 536547
00:30:05.150 [2024-12-05 20:49:58.266855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:05.150 [2024-12-05 20:49:58.266886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.267087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.267118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 536547 ']'
00:30:05.150 [2024-12-05 20:49:58.267239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.267269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.267385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.267414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:05.150 [2024-12-05 20:49:58.267603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.267633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.267756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:05.150 [2024-12-05 20:49:58.267785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.267916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.267945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.150 [2024-12-05 20:49:58.268071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.150 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.150 [2024-12-05 20:49:58.268104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.150 qpair failed and we were unable to recover it. 00:30:05.150 [2024-12-05 20:49:58.268239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.150 [2024-12-05 20:49:58.268269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.150 qpair failed and we were unable to recover it. 00:30:05.150 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.150 [2024-12-05 20:49:58.268399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.150 [2024-12-05 20:49:58.268432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.150 qpair failed and we were unable to recover it. 00:30:05.150 [2024-12-05 20:49:58.268543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.150 [2024-12-05 20:49:58.268574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.150 qpair failed and we were unable to recover it. 
00:30:05.150 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.150 [2024-12-05 20:49:58.268755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.150 [2024-12-05 20:49:58.268789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.150 qpair failed and we were unable to recover it. 00:30:05.150 [2024-12-05 20:49:58.268895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.150 [2024-12-05 20:49:58.268927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.150 qpair failed and we were unable to recover it. 00:30:05.150 [2024-12-05 20:49:58.269050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.150 [2024-12-05 20:49:58.269098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.150 qpair failed and we were unable to recover it. 00:30:05.150 [2024-12-05 20:49:58.269299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.150 [2024-12-05 20:49:58.269330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.150 qpair failed and we were unable to recover it. 00:30:05.150 [2024-12-05 20:49:58.269441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.150 [2024-12-05 20:49:58.269470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.150 qpair failed and we were unable to recover it. 
00:30:05.150 [2024-12-05 20:49:58.269619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.150 [2024-12-05 20:49:58.269686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:05.150 qpair failed and we were unable to recover it.
00:30:05.151 [2024-12-05 20:49:58.274492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.151 [2024-12-05 20:49:58.274559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.151 qpair failed and we were unable to recover it.
00:30:05.153 [2024-12-05 20:49:58.287055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.153 [2024-12-05 20:49:58.287097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.153 qpair failed and we were unable to recover it. 00:30:05.153 [2024-12-05 20:49:58.287394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.153 [2024-12-05 20:49:58.287425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.153 qpair failed and we were unable to recover it. 00:30:05.153 [2024-12-05 20:49:58.287555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.153 [2024-12-05 20:49:58.287585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.153 qpair failed and we were unable to recover it. 00:30:05.153 [2024-12-05 20:49:58.287899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.153 [2024-12-05 20:49:58.287936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.153 qpair failed and we were unable to recover it. 00:30:05.153 [2024-12-05 20:49:58.288110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.153 [2024-12-05 20:49:58.288143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.153 qpair failed and we were unable to recover it. 
00:30:05.153 [2024-12-05 20:49:58.288334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.153 [2024-12-05 20:49:58.288364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.153 qpair failed and we were unable to recover it. 00:30:05.153 [2024-12-05 20:49:58.288494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.153 [2024-12-05 20:49:58.288524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.153 qpair failed and we were unable to recover it. 00:30:05.153 [2024-12-05 20:49:58.288711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.153 [2024-12-05 20:49:58.288742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.153 qpair failed and we were unable to recover it. 00:30:05.153 [2024-12-05 20:49:58.288876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.153 [2024-12-05 20:49:58.288907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.153 qpair failed and we were unable to recover it. 00:30:05.153 [2024-12-05 20:49:58.289107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.289153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 
00:30:05.154 [2024-12-05 20:49:58.289420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.289452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.289590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.289621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.289723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.289753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.289955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.289986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.290161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.290193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 
00:30:05.154 [2024-12-05 20:49:58.290364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.290393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.290507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.290537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.290723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.290754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.291007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.291038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.291183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.291213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 
00:30:05.154 [2024-12-05 20:49:58.291341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.291371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.291510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.291540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.291722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.291752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.291942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.291972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.292216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.292249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 
00:30:05.154 [2024-12-05 20:49:58.292424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.292455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.292561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.292592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.292708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.292738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.292918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.292948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.293147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.293180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 
00:30:05.154 [2024-12-05 20:49:58.293356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.293426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.293622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.293657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.293777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.293809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.293944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.293975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.294143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.294176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 
00:30:05.154 [2024-12-05 20:49:58.294357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.294388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.294517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.294548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.294826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.294857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.295032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.295072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.295201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.295231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 
00:30:05.154 [2024-12-05 20:49:58.295341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.295372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.295559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.295590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.295835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.295865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.295969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.295999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.154 [2024-12-05 20:49:58.296201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.296234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 
00:30:05.154 [2024-12-05 20:49:58.296371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.154 [2024-12-05 20:49:58.296401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.154 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.296527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.296557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.296727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.296758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.296956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.296986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.297112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.297143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 
00:30:05.155 [2024-12-05 20:49:58.297332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.297363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.297473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.297503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.297628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.297658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.297761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.297793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.297893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.297923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 
00:30:05.155 [2024-12-05 20:49:58.298053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.298097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.298215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.298246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.298358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.298395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.298566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.298597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.298769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.298798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 
00:30:05.155 [2024-12-05 20:49:58.298988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.299020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.299212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.299243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.299361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.299392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.299512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.299543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.299722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.299752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 
00:30:05.155 [2024-12-05 20:49:58.299868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.299898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.300081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.300115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.300234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.300265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.300394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.300425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.300631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.300663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 
00:30:05.155 [2024-12-05 20:49:58.300863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.300893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.301096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.301129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.301399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.301430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.301545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.301575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.301764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.301794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 
00:30:05.155 [2024-12-05 20:49:58.301977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.302007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.302132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.302164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.302272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.302301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.302536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.302566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.302684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.302715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 
00:30:05.155 [2024-12-05 20:49:58.302922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.302953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.303140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.155 [2024-12-05 20:49:58.303173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.155 qpair failed and we were unable to recover it. 00:30:05.155 [2024-12-05 20:49:58.303345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.303375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.303548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.303579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.303701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.303737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 
00:30:05.156 [2024-12-05 20:49:58.303916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.303946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.304142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.304173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.304277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.304308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.304408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.304439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.304557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.304587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 
00:30:05.156 [2024-12-05 20:49:58.304773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.304804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.304981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.305012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.305262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.305295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.305471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.305502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.305603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.305634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 
00:30:05.156 [2024-12-05 20:49:58.305733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.305764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.305957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.305987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.306105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.306156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.306281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.306312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.306505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.306537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 
00:30:05.156 [2024-12-05 20:49:58.306650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.306681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.306804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.306835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.307019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.307050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.307169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.307198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.307307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.307338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 
00:30:05.156 [2024-12-05 20:49:58.307512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.307544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.307644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.307675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.307779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.307809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.307911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.307940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.308134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.308167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 
00:30:05.156 [2024-12-05 20:49:58.308287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.308317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.308501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.308536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.308650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.308682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.308795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.308826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.309001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.309031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 
00:30:05.156 [2024-12-05 20:49:58.309381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.309450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.309650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.309686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.309799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.309832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.156 [2024-12-05 20:49:58.309947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.156 [2024-12-05 20:49:58.309979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.156 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.310165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.310198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 
00:30:05.157 [2024-12-05 20:49:58.310454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.310486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.310731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.310763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.310873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.310904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.311099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.311132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.311250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.311282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 
00:30:05.157 [2024-12-05 20:49:58.311427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.311458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.311647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.311678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.311779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.311808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.312000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.312032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.312228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.312265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 
00:30:05.157 [2024-12-05 20:49:58.312406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.312437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.312624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.312655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.312895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.312926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.313119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.313152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.313261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.313290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 
00:30:05.157 [2024-12-05 20:49:58.313502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.313533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.313719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.313750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.313886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.313917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.314099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.314139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.314270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.314302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 
00:30:05.157 [2024-12-05 20:49:58.314524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.314554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.314684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.314715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.314846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.314877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.315054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.315109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.315280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.315311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 
00:30:05.157 [2024-12-05 20:49:58.315498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.315529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.315648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.315679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.315805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.315835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.316029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.316068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.316200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.316231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.157 qpair failed and we were unable to recover it. 00:30:05.157 [2024-12-05 20:49:58.316331] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:30:05.157 [2024-12-05 20:49:58.316381] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.157 [2024-12-05 20:49:58.316415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.157 [2024-12-05 20:49:58.316453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.316566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.316596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.316786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.316815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.316944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.316973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 
00:30:05.158 [2024-12-05 20:49:58.317087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.317118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.317306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.317335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.317537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.317568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.317813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.317843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.317968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.317999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 
00:30:05.158 [2024-12-05 20:49:58.318196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.318228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.318343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.318373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.318556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.318588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.318778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.318810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.318918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.318947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 
00:30:05.158 [2024-12-05 20:49:58.319077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.319109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.319219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.319248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.319370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.319405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.319615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.319647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.319824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.319855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 
00:30:05.158 [2024-12-05 20:49:58.319974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.320005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.320205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.320238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.320361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.320393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.320577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.320608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.320794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.320825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 
00:30:05.158 [2024-12-05 20:49:58.321012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.321043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.321229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.321261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.321372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.321402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.321586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.321622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 00:30:05.158 [2024-12-05 20:49:58.321823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.158 [2024-12-05 20:49:58.321865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.158 qpair failed and we were unable to recover it. 
00:30:05.158 [2024-12-05 20:49:58.321992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.158 [2024-12-05 20:49:58.322023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.158 qpair failed and we were unable to recover it.
00:30:05.158 [2024-12-05 20:49:58.322161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.158 [2024-12-05 20:49:58.322194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.158 qpair failed and we were unable to recover it.
00:30:05.158 [2024-12-05 20:49:58.322498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.158 [2024-12-05 20:49:58.322530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.158 qpair failed and we were unable to recover it.
00:30:05.158 [2024-12-05 20:49:58.322641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.158 [2024-12-05 20:49:58.322671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.158 qpair failed and we were unable to recover it.
00:30:05.158 [2024-12-05 20:49:58.322866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.158 [2024-12-05 20:49:58.322897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.158 qpair failed and we were unable to recover it.
00:30:05.158 [2024-12-05 20:49:58.323078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.158 [2024-12-05 20:49:58.323112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.158 qpair failed and we were unable to recover it.
00:30:05.158 [2024-12-05 20:49:58.323210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.158 [2024-12-05 20:49:58.323240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.158 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.323338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.323369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.323475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.323506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.323745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.323776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.323893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.323925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.324116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.324150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.324335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.324368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.324491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.324523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.324754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.324787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.324903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.324934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.325079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.325111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.325219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.325249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.325351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.325382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.325498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.325530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.325644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.325676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.325874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.325905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.326092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.326125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.326226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.326257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.326369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.326400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.326615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.326650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.326841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.326876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.327034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.327075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.327184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.327215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.327331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.327363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.327484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.327516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.327619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.327650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.327754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.327787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.328036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.328081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.328277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.328308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.328523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.328554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.328758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.328790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.328909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.328940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.329122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.329155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.329289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.329321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.329591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.329622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.329816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.329848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.159 [2024-12-05 20:49:58.330051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.159 [2024-12-05 20:49:58.330097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.159 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.330225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.330257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.330447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.330479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.330602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.330634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.330834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.330866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.330978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.331010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.331124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.331156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.331279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.331310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.331553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.331584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.331703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.331735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.331865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.331900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.332024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.332056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.332274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.332305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.332482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.332514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.332627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.332659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.332833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.332865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.333039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.333083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.333268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.333300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.333489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.333520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.333643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.333675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.333956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.333988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.334099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.334131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.334246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.334278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.334406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.334445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.334563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.334595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.334775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.334807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.335077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.335110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.335218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.335247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.335363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.335395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.335573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.335604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.335714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.335746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.335938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.335970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.336179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.336212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.160 [2024-12-05 20:49:58.336324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.160 [2024-12-05 20:49:58.336354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.160 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.336628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.336659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.336773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.336805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.336987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.337020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.337212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.337245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.337460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.337492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.337678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.337710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.337894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.337926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.338128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.338173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.338307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.338338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.338473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.338504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.338727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.338757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.338960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.338991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.339207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.339239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.339350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.339385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.339490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.339520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.339714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.339745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.339877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.339913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.340087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.340119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.340393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.161 [2024-12-05 20:49:58.340425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.161 qpair failed and we were unable to recover it.
00:30:05.161 [2024-12-05 20:49:58.340539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.340576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.340763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.340794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.340900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.340930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.341124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.341157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.341266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.341295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 
00:30:05.161 [2024-12-05 20:49:58.341463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.341494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.341593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.341624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.341729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.341760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.341869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.341899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.342088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.342120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 
00:30:05.161 [2024-12-05 20:49:58.342256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.342293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.342459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.342490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.342603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.342634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.342900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.342932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.343067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.343101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 
00:30:05.161 [2024-12-05 20:49:58.343211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.343241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.161 [2024-12-05 20:49:58.343482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.161 [2024-12-05 20:49:58.343512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.161 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.343629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.343660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.343797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.343828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.344027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.344071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 
00:30:05.162 [2024-12-05 20:49:58.344252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.344283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.344397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.344427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.344607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.344638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.344748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.344778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.344970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.345001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 
00:30:05.162 [2024-12-05 20:49:58.345120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.345151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.345254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.345285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.345503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.345534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.345801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.345832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.345967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.345997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 
00:30:05.162 [2024-12-05 20:49:58.346214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.346247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.346385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.346416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.346550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.346581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.346695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.346725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.346927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.346958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 
00:30:05.162 [2024-12-05 20:49:58.347137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.347169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.347299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.347329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.347446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.347476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.347595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.347626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.347745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.347776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 
00:30:05.162 [2024-12-05 20:49:58.347954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.347984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.348140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.348172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.348280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.348310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.348420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.348450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.348560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.348591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 
00:30:05.162 [2024-12-05 20:49:58.348766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.348798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.349009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.349039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.349241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.349273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.162 [2024-12-05 20:49:58.349393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.162 [2024-12-05 20:49:58.349424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.162 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.349524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.349555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 
00:30:05.163 [2024-12-05 20:49:58.349749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.349786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.349889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.349918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.350016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.350046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.350274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.350306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.350511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.350541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 
00:30:05.163 [2024-12-05 20:49:58.350658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.350688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.350973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.351004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.351248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.351279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.351464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.351494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.351617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.351648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 
00:30:05.163 [2024-12-05 20:49:58.351784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.351814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.352070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.352102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.352218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.352248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.352521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.352551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.352660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.352691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 
00:30:05.163 [2024-12-05 20:49:58.352805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.352835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.352941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.352972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.353142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.353174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.353282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.353312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.353433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.353463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 
00:30:05.163 [2024-12-05 20:49:58.353664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.353695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.353869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.353900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.354013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.354042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.354322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.354354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.354477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.354506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 
00:30:05.163 [2024-12-05 20:49:58.354680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.354711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.354893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.354922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.355057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.355099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.355318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.355348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.355536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.355566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 
00:30:05.163 [2024-12-05 20:49:58.355761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.355792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.355975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.356006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.356197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.356229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.356349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.356379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 00:30:05.163 [2024-12-05 20:49:58.356497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.163 [2024-12-05 20:49:58.356527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.163 qpair failed and we were unable to recover it. 
00:30:05.164 [2024-12-05 20:49:58.356645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.356676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.356857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.356887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.357008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.357039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.357170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.357203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.357388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.357419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 
00:30:05.164 [2024-12-05 20:49:58.357524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.357565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.357755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.357785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.358002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.358033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.358159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.358190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.358367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.358397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 
00:30:05.164 [2024-12-05 20:49:58.358605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.358636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.358747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.358778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.359048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.359101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.359216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.359245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.359365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.359394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 
00:30:05.164 [2024-12-05 20:49:58.359510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.359540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.359730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.359760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.359884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.359914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.360094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.360126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.360343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.360374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 
00:30:05.164 [2024-12-05 20:49:58.360499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.360530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.360653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.360683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.360873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.360903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.361005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.361035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.361292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.361323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 
00:30:05.164 [2024-12-05 20:49:58.361444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.361475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.361581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.361611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.361785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.361815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.362094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.362126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.362306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.362337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 
00:30:05.164 [2024-12-05 20:49:58.362552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.362583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.362696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.362725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.362943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.362974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.363111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.363142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 00:30:05.164 [2024-12-05 20:49:58.363323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.363354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.164 qpair failed and we were unable to recover it. 
00:30:05.164 [2024-12-05 20:49:58.363462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.164 [2024-12-05 20:49:58.363492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.363664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.363695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.363812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.363843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.363972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.364003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.364212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.364245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 
00:30:05.165 [2024-12-05 20:49:58.364421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.364452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.364573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.364602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.364797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.364828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.364932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.364961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.365081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.365113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 
00:30:05.165 [2024-12-05 20:49:58.365292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.365328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.365515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.365546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.365672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.365702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.365827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.365857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.366031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.366070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 
00:30:05.165 [2024-12-05 20:49:58.366180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.366210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.366316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.366347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.366559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.366590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.366797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.366827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.366955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.366986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 
00:30:05.165 [2024-12-05 20:49:58.367118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.367149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.367257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.367286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.367467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.367498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.367684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.367716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.367911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.367941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 
00:30:05.165 [2024-12-05 20:49:58.368070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.368102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.368226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.368256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.368480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.368510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.368644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.368675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.368792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.368820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 
00:30:05.165 [2024-12-05 20:49:58.368995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.369026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.369157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.369199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.369309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.369339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.369453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.369485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.369589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.369620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 
00:30:05.165 [2024-12-05 20:49:58.369820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.369852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.165 qpair failed and we were unable to recover it. 00:30:05.165 [2024-12-05 20:49:58.369967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.165 [2024-12-05 20:49:58.369999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.370229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.370263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.370372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.370401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.370521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.370550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 
00:30:05.166 [2024-12-05 20:49:58.370730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.370760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.370874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.370903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.371001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.371031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.371229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.371265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.371439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.371471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 
00:30:05.166 [2024-12-05 20:49:58.371674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.371705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.371963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.371994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.372207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.372240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.372363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.372395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.372514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.372544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 
00:30:05.166 [2024-12-05 20:49:58.372661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.372699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.372801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.372831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.372939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.372969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.373152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.373185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.373452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.373483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 
00:30:05.166 [2024-12-05 20:49:58.373666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.373697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.373849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.373881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.374003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.374033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.374157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.374189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.374307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.374338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 
00:30:05.166 [2024-12-05 20:49:58.374513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.374544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.374750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.374783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.374963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.374995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.375172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.375204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 00:30:05.166 [2024-12-05 20:49:58.375317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.166 [2024-12-05 20:49:58.375349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.166 qpair failed and we were unable to recover it. 
00:30:05.168 [2024-12-05 20:49:58.385606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.168 [2024-12-05 20:49:58.385638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.168 qpair failed and we were unable to recover it. 00:30:05.168 [2024-12-05 20:49:58.385809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.168 [2024-12-05 20:49:58.385841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.168 qpair failed and we were unable to recover it. 00:30:05.168 [2024-12-05 20:49:58.385985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.168 [2024-12-05 20:49:58.386023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.168 qpair failed and we were unable to recover it. 00:30:05.168 [2024-12-05 20:49:58.386147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.168 [2024-12-05 20:49:58.386183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.168 qpair failed and we were unable to recover it. 00:30:05.168 [2024-12-05 20:49:58.386376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.168 [2024-12-05 20:49:58.386406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.168 qpair failed and we were unable to recover it. 
00:30:05.169 [2024-12-05 20:49:58.392594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 
00:30:05.169 [2024-12-05 20:49:58.396204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.169 [2024-12-05 20:49:58.396252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.169 qpair failed and we were unable to recover it. 00:30:05.169 [2024-12-05 20:49:58.396385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.169 [2024-12-05 20:49:58.396418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.169 qpair failed and we were unable to recover it. 00:30:05.169 [2024-12-05 20:49:58.396540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.169 [2024-12-05 20:49:58.396573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.169 qpair failed and we were unable to recover it. 00:30:05.169 [2024-12-05 20:49:58.396755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.169 [2024-12-05 20:49:58.396786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.169 qpair failed and we were unable to recover it. 00:30:05.169 [2024-12-05 20:49:58.396962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.169 [2024-12-05 20:49:58.396994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.169 qpair failed and we were unable to recover it. 
00:30:05.169 [2024-12-05 20:49:58.397121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.169 [2024-12-05 20:49:58.397154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.169 qpair failed and we were unable to recover it. 00:30:05.169 [2024-12-05 20:49:58.397255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.169 [2024-12-05 20:49:58.397285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.397453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.397486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.397620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.397652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.397846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.397878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 
00:30:05.170 [2024-12-05 20:49:58.398009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.398041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.398242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.398276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.398388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.398420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.398610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.398642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.398763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.398795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 
00:30:05.170 [2024-12-05 20:49:58.398908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.398939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.399178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.399212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.399336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.399367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.399539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.399572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.399767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.399799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 
00:30:05.170 [2024-12-05 20:49:58.399983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.400014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.400206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.400240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.400368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.400400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.400643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.400675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.400801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.400831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 
00:30:05.170 [2024-12-05 20:49:58.400947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.400977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.401092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.401124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.401237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.401275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.401491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.401524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.401659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.401692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 
00:30:05.170 [2024-12-05 20:49:58.401864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.401895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.402182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.402216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.402482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.402513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.402626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.402658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.402830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.402862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 
00:30:05.170 [2024-12-05 20:49:58.402977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.403009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.403140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.403173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.403423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.403454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.403649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.403682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.403857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.403888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 
00:30:05.170 [2024-12-05 20:49:58.404072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.404106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.404302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.404335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.404606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.170 [2024-12-05 20:49:58.404639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.170 qpair failed and we were unable to recover it. 00:30:05.170 [2024-12-05 20:49:58.404913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.404946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.405053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.405098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 
00:30:05.171 [2024-12-05 20:49:58.405246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.405278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.405462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.405494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.405682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.405714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.405985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.406017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.406158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.406191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 
00:30:05.171 [2024-12-05 20:49:58.406312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.406345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.406545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.406577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.406765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.406798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.406920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.406952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.407089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.407128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 
00:30:05.171 [2024-12-05 20:49:58.407249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.407280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.407480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.407511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.407772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.407803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.407939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.407970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.408093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.408139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 
00:30:05.171 [2024-12-05 20:49:58.408245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.408276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.408483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.408514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.408627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.408659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.408779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.408810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.409001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.409033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 
00:30:05.171 [2024-12-05 20:49:58.409212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.409244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.409419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.409451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.409562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.409593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.409713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.409745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.409990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.410022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 
00:30:05.171 [2024-12-05 20:49:58.410157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.410189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.410384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.410416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.410543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.410574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.410749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.410780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.410886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.410917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 
00:30:05.171 [2024-12-05 20:49:58.411096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.411129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.411240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.411271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.411536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.411567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.411756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.171 [2024-12-05 20:49:58.411788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.171 qpair failed and we were unable to recover it. 00:30:05.171 [2024-12-05 20:49:58.412002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.172 [2024-12-05 20:49:58.412034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.172 qpair failed and we were unable to recover it. 
00:30:05.172 [2024-12-05 20:49:58.412247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.172 [2024-12-05 20:49:58.412280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.172 qpair failed and we were unable to recover it.
00:30:05.172 [the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats 14 more times for tqpair=0x249f590, 2024-12-05 20:49:58.412478 through 20:49:58.415300]
00:30:05.172 [2024-12-05 20:49:58.415541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.172 [2024-12-05 20:49:58.415588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.172 qpair failed and we were unable to recover it.
00:30:05.173 [the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats 39 more times for tqpair=0x7f8c10000b90, 2024-12-05 20:49:58.415784 through 20:49:58.423492]
00:30:05.173 [2024-12-05 20:49:58.423693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.173 [2024-12-05 20:49:58.423732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420
00:30:05.173 qpair failed and we were unable to recover it.
00:30:05.174 [the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats 39 more times for tqpair=0x7f8c04000b90, 2024-12-05 20:49:58.423927 through 20:49:58.431041]
00:30:05.174 [2024-12-05 20:49:58.431240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.174 [2024-12-05 20:49:58.431282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:05.174 qpair failed and we were unable to recover it.
00:30:05.175 [the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats 19 more times for tqpair=0x7f8c08000b90, 2024-12-05 20:49:58.431463 through 20:49:58.435220]
00:30:05.175 [2024-12-05 20:49:58.435347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.435375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.435505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.435533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.435774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.435805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.435979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.436008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.436281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.436312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 
00:30:05.175 [2024-12-05 20:49:58.436436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.436464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.436662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.436693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.436816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.436846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.437022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.437052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.437179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.437208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 
00:30:05.175 [2024-12-05 20:49:58.437343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.437373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.437550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.437578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.437686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.437714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.437900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.437930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.438051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.438090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 
00:30:05.175 [2024-12-05 20:49:58.438306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.438334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.438505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.175 [2024-12-05 20:49:58.438535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.175 qpair failed and we were unable to recover it. 00:30:05.175 [2024-12-05 20:49:58.438729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.438758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.438951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.438981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.439151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.439182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 
00:30:05.176 [2024-12-05 20:49:58.439321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.439358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.439646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.439677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.439874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.439904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.440039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.440081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.440259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.440289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 
00:30:05.176 [2024-12-05 20:49:58.440584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.440615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.440734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.440764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.441009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.441043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.441181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.441212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.441340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.441371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 
00:30:05.176 [2024-12-05 20:49:58.441485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.441515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.441636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.441666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.441942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.441973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.442096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.442128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.442253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.442284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 
00:30:05.176 [2024-12-05 20:49:58.442462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.442494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.442607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.442639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.442813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.442845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.443073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.443106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.443213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.443244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 
00:30:05.176 [2024-12-05 20:49:58.443372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.443404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.443596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.443629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.443845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.443876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.443988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.444019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.176 [2024-12-05 20:49:58.444028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.444063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.176 [2024-12-05 20:49:58.444076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.176 [2024-12-05 20:49:58.444085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:30:05.176 [2024-12-05 20:49:58.444093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.176 [2024-12-05 20:49:58.444212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.444246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.444427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.444462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.444645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.444674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.444801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.444830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.444938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.444967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 
00:30:05.176 [2024-12-05 20:49:58.445215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.445247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.445517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.445549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.176 qpair failed and we were unable to recover it. 00:30:05.176 [2024-12-05 20:49:58.445660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.176 [2024-12-05 20:49:58.445691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.445871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.445901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.446018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.446046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 
00:30:05.177 [2024-12-05 20:49:58.446169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.446199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.446293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:05.177 [2024-12-05 20:49:58.446471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.446405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:05.177 [2024-12-05 20:49:58.446504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.446516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:05.177 [2024-12-05 20:49:58.446517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:05.177 [2024-12-05 20:49:58.446768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.446799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.446921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.446963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 
00:30:05.177 [2024-12-05 20:49:58.447153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.447192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.447392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.447424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.447532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.447562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.447704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.447736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.447875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.447907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 
00:30:05.177 [2024-12-05 20:49:58.448024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.448056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.448274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.448306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.448485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.448516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.448644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.448675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.448790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.448822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 
00:30:05.177 [2024-12-05 20:49:58.449047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.449091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.449268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.449299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.449484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.449516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.449720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.449753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 00:30:05.177 [2024-12-05 20:49:58.449941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.177 [2024-12-05 20:49:58.449974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.177 qpair failed and we were unable to recover it. 
00:30:05.177 [2024-12-05 20:49:58.450186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.177 [2024-12-05 20:49:58.450219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420
00:30:05.177 qpair failed and we were unable to recover it.
00:30:05.180 [repeats elided: the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." sequence recurs continuously from 20:49:58.450186 through 20:49:58.474035 against addr=10.0.0.2, port=4420, cycling through tqpair handles 0x7f8c08000b90, 0x7f8c10000b90, 0x249f590, and 0x7f8c04000b90]
00:30:05.180 [2024-12-05 20:49:58.474265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.180 [2024-12-05 20:49:58.474298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.180 qpair failed and we were unable to recover it. 00:30:05.180 [2024-12-05 20:49:58.474428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.474460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.474639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.474672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.474843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.474882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.475007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.475039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 
00:30:05.181 [2024-12-05 20:49:58.475177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.475209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.475424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.475457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.475707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.475738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.475915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.475946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.476211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.476245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 
00:30:05.181 [2024-12-05 20:49:58.476372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.476403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.476692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.476725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.476906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.476938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.477080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.477113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.477236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.477268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 
00:30:05.181 [2024-12-05 20:49:58.477512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.477546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.477716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.477748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.478025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.478069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.478187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.478218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.478332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.478363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 
00:30:05.181 [2024-12-05 20:49:58.478534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.478568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.478688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.478720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.478902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.478934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.479111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.479144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.479263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.479296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 
00:30:05.181 [2024-12-05 20:49:58.479411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.479443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.479572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.479605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.479781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.479813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.479933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.479964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.480159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.480197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 
00:30:05.181 [2024-12-05 20:49:58.480311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.480342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.480462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.480493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.480624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.480655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.480826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.480857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.481049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.481091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 
00:30:05.181 [2024-12-05 20:49:58.481215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.481246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.481356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.181 [2024-12-05 20:49:58.481387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.181 qpair failed and we were unable to recover it. 00:30:05.181 [2024-12-05 20:49:58.481563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.481595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.481789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.481822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.481931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.481964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 
00:30:05.182 [2024-12-05 20:49:58.482075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.482108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.482326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.482359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.482546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.482577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.482681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.482717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.482966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.482998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 
00:30:05.182 [2024-12-05 20:49:58.483258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.483293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.483412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.483444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.483637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.483669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.483927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.483960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.484079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.484112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 
00:30:05.182 [2024-12-05 20:49:58.484232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.484265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.484376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.484409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.484584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.484617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.484790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.484822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.485006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.485038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 
00:30:05.182 [2024-12-05 20:49:58.485183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.485215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.485344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.485379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.485591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.485626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.485807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.485840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.485948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.485980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 
00:30:05.182 [2024-12-05 20:49:58.486098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.486132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.486390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.486423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.486668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.486700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.486886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.486918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.487045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.487085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 
00:30:05.182 [2024-12-05 20:49:58.487199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.487231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.487448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.487481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.487724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.487757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.487896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.182 [2024-12-05 20:49:58.487929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.182 qpair failed and we were unable to recover it. 00:30:05.182 [2024-12-05 20:49:58.488125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-12-05 20:49:58.488159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 
00:30:05.183 [2024-12-05 20:49:58.488367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-12-05 20:49:58.488399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-12-05 20:49:58.488520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-12-05 20:49:58.488552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-12-05 20:49:58.488672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-12-05 20:49:58.488705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-12-05 20:49:58.488837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-12-05 20:49:58.488869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-12-05 20:49:58.489000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-12-05 20:49:58.489032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 
00:30:05.183 [2024-12-05 20:49:58.489175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-12-05 20:49:58.489208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 00:30:05.183 [2024-12-05 20:49:58.489405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.183 [2024-12-05 20:49:58.489460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.183 qpair failed and we were unable to recover it. 
00:30:05.183 [message repeated for each subsequent reconnect attempt on tqpair=0x7f8c08000b90, addr=10.0.0.2, port=4420, from 20:49:58.489569 through 20:49:58.512228: connect() failed, errno = 111; qpair failed and we were unable to recover it.]
00:30:05.186 [2024-12-05 20:49:58.512338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.512369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-12-05 20:49:58.512659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.512691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-12-05 20:49:58.512890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.512922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-12-05 20:49:58.513044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.513086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-12-05 20:49:58.513262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.513294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 
00:30:05.186 [2024-12-05 20:49:58.513483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.513515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-12-05 20:49:58.513761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.513793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-12-05 20:49:58.513928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.513958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-12-05 20:49:58.514096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.514129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-12-05 20:49:58.514303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.514335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 
00:30:05.186 [2024-12-05 20:49:58.514504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.514534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-12-05 20:49:58.514719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.514750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-12-05 20:49:58.514865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.186 [2024-12-05 20:49:58.514894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.186 qpair failed and we were unable to recover it. 00:30:05.186 [2024-12-05 20:49:58.515010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.515047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.515303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.515333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 
00:30:05.187 [2024-12-05 20:49:58.515604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.515636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.515906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.515937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.516070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.516103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.516297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.516328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.516512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.516543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 
00:30:05.187 [2024-12-05 20:49:58.516643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.516677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.516865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.516897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.517075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.517106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.517228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.517259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.517410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.517439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 
00:30:05.187 [2024-12-05 20:49:58.517664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.517694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.517825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.517854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.517966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.517996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.518180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.518212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.518388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.518417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 
00:30:05.187 [2024-12-05 20:49:58.518590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.518619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.518825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.518857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.519129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.519162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.519356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.519387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.519562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.519594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 
00:30:05.187 [2024-12-05 20:49:58.519769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.519801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.520018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.520048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.520252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.520283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.520470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.520502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.520712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.520743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 
00:30:05.187 [2024-12-05 20:49:58.520936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.520968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.521202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.521235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.521406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.521438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.521688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.521720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 00:30:05.187 [2024-12-05 20:49:58.521843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.187 [2024-12-05 20:49:58.521874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.187 qpair failed and we were unable to recover it. 
00:30:05.187 [2024-12-05 20:49:58.522166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.522199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.522326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.522358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.522538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.522567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.522693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.522723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.522937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.522968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 
00:30:05.188 [2024-12-05 20:49:58.523240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.523273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.523458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.523491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.523601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.523631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.523756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.523797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.523989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.524019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 
00:30:05.188 [2024-12-05 20:49:58.524214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.524247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.524430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.524461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.524576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.524607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.524787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.524819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.525083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.525116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 
00:30:05.188 [2024-12-05 20:49:58.525288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.525319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.525517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.525548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.525843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.525875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.526162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.526194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.526371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.526400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 
00:30:05.188 [2024-12-05 20:49:58.526584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.526615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.526756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.526787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.527041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.527085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.527194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.527226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.527397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.527428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 
00:30:05.188 [2024-12-05 20:49:58.527551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.527580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.527686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.527716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.527915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.527945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.528123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.528156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.528365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.528396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 
00:30:05.188 [2024-12-05 20:49:58.528504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.528535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.528781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.528812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.528985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.529016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.529212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.529245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.188 [2024-12-05 20:49:58.529362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.529392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c08000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 
00:30:05.188 [2024-12-05 20:49:58.529537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.188 [2024-12-05 20:49:58.529588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.188 qpair failed and we were unable to recover it. 00:30:05.189 [2024-12-05 20:49:58.529804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-12-05 20:49:58.529835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-12-05 20:49:58.530012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-12-05 20:49:58.530043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-12-05 20:49:58.530364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-12-05 20:49:58.530396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 00:30:05.189 [2024-12-05 20:49:58.530588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.189 [2024-12-05 20:49:58.530620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.189 qpair failed and we were unable to recover it. 
00:30:05.189 [2024-12-05 20:49:58.530733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.530764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.530973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.531004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.531287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.531320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.531436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.531465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.531649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.531680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.531868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.531899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.532018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.532049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.532233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.532264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.532528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.532567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.532680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.532711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.532917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.532948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.533161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.533194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.533318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.533348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.533541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.533572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.533886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.533917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.534031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.534070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.534263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.534294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.534484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.534514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.534786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.534817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.534995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.535026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.535264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.535295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.535487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.535519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.535725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.535757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.536003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.536034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.536237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.536268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.536481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.536512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.536755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.536786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.537037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.537080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.537330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.537361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.537546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.537578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.537845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.537876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.538004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.189 [2024-12-05 20:49:58.538036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.189 qpair failed and we were unable to recover it.
00:30:05.189 [2024-12-05 20:49:58.538303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.538335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.538634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.538664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.538856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.538888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.539118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.539167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.539306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.539339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.539613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.539645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.539919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.539951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.540132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.540166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.540343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.540375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.540503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.540534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.540746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.540778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.540897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.540928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.541197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.541231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.541497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.541528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.541719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.541751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.541940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.541972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.542091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.542123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.542330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.542363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.542502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.542533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.542706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.542737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.542925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.542957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.543227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.543263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.543475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.543507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.543688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.543719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.543904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.543935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.544203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.544236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.544410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.544442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.544656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.544687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.544790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.544820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:05.190 [2024-12-05 20:49:58.545037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.545081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.545231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.545264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.190 [2024-12-05 20:49:58.545484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.190 [2024-12-05 20:49:58.545516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.190 qpair failed and we were unable to recover it.
00:30:05.191 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:05.191 [2024-12-05 20:49:58.545784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.545816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:05.191 [2024-12-05 20:49:58.546083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.546119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.191 [2024-12-05 20:49:58.546310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.546344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.546590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.546621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.546895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.546928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.547177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.547209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.547399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.547430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.547677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.547709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.547897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.547934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.548238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.548271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.548410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.548444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.548709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.548741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.548980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.549010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.549137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.549169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.549276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.549307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.549549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.549581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.549785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.549817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.549931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.549963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.550085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.550117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.550256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.550286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.550408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.550438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.550680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.550711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.550900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.550932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.551076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.551114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.551315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.551346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.551517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.551548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.551789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.551820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.552071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.552103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.552290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.191 [2024-12-05 20:49:58.552322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.191 qpair failed and we were unable to recover it.
00:30:05.191 [2024-12-05 20:49:58.552508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-12-05 20:49:58.552540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-12-05 20:49:58.552710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-12-05 20:49:58.552741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-12-05 20:49:58.552844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-12-05 20:49:58.552873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-12-05 20:49:58.553049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-12-05 20:49:58.553090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-12-05 20:49:58.553288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-12-05 20:49:58.553319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 
00:30:05.191 [2024-12-05 20:49:58.553504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-12-05 20:49:58.553537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-12-05 20:49:58.553805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-12-05 20:49:58.553837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-12-05 20:49:58.553961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-12-05 20:49:58.553993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.191 qpair failed and we were unable to recover it. 00:30:05.191 [2024-12-05 20:49:58.554148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.191 [2024-12-05 20:49:58.554182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.554310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.554340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-12-05 20:49:58.554467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.554499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.554743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.554774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.554902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.554932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.555055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.555117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.555240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.555272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-12-05 20:49:58.555378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.555410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.555592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.555623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.555826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.555859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.556032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.556076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.556193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.556224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-12-05 20:49:58.556414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.556446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.556582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.556620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.556809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.556841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.556964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.556995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.557120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.557152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-12-05 20:49:58.557318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.557350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.557561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.557592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.557770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.557802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.557911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.557944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.558073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.558107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-12-05 20:49:58.558210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.558241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.558409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.558440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.558682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.558715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.558897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.558929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.559173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.559204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-12-05 20:49:58.559402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.559433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.559601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.559633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.559755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.559786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.559907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.559938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.560040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.560084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-12-05 20:49:58.560318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.560349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.560525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.560556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.560760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.560792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.560966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.560998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.561138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.561171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 
00:30:05.192 [2024-12-05 20:49:58.561379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.561411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.561599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.192 [2024-12-05 20:49:58.561631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.192 qpair failed and we were unable to recover it. 00:30:05.192 [2024-12-05 20:49:58.561751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.561782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.561898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.561936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.562045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.562087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-12-05 20:49:58.562348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.562380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.562500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.562532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.562723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.562755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.562866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.562897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.563138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.563171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-12-05 20:49:58.563414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.563446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.563555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.563587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.563705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.563737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.563921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.563953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.564138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.564172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-12-05 20:49:58.564313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.564345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.564529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.564560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.564668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.564701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.564894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.564926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.565123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.565154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-12-05 20:49:58.565261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.565292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.565411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.565442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.565648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.565680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.565864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.565896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.566101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.566133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-12-05 20:49:58.566306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.566338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.566480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.566510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.566614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.566646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.566762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.566795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.567008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.567040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-12-05 20:49:58.567300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.567332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.567538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.567570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.567704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.567736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.567839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.567871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.568002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.568033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-12-05 20:49:58.568284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.568317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.568441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.568472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.568592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.568624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.568803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.568834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.193 [2024-12-05 20:49:58.569005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.569036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 
00:30:05.193 [2024-12-05 20:49:58.569297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.193 [2024-12-05 20:49:58.569331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.193 qpair failed and we were unable to recover it. 00:30:05.456 [2024-12-05 20:49:58.569522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.456 [2024-12-05 20:49:58.569554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.456 qpair failed and we were unable to recover it. 00:30:05.456 [2024-12-05 20:49:58.569728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.456 [2024-12-05 20:49:58.569761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.456 qpair failed and we were unable to recover it. 00:30:05.456 [2024-12-05 20:49:58.569956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.456 [2024-12-05 20:49:58.569989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.456 qpair failed and we were unable to recover it. 00:30:05.456 [2024-12-05 20:49:58.570759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.456 [2024-12-05 20:49:58.570899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.456 qpair failed and we were unable to recover it. 
00:30:05.456 [2024-12-05 20:49:58.571273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.456 [2024-12-05 20:49:58.571331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.456 qpair failed and we were unable to recover it. 
00:30:05.456 [2024-12-05 20:49:58.574224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.456 [2024-12-05 20:49:58.574257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.456 qpair failed and we were unable to recover it. 00:30:05.456 [2024-12-05 20:49:58.574364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.456 [2024-12-05 20:49:58.574396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.456 qpair failed and we were unable to recover it. 00:30:05.456 [2024-12-05 20:49:58.574516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.456 [2024-12-05 20:49:58.574548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.456 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.574662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.574694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.574817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.574848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 
00:30:05.457 [2024-12-05 20:49:58.574979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.575011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.575144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.575176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.575308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.575339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.575471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.575503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.575713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.575744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 
00:30:05.457 [2024-12-05 20:49:58.575867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.575899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.576017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.576049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.576233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.576264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.576372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.576403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.576597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.576629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 
00:30:05.457 [2024-12-05 20:49:58.576803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.576836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.576960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.576991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.577181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.577215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.577333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.577365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.577481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.577513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 
00:30:05.457 [2024-12-05 20:49:58.577631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.577663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.577771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.577803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.577999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.578031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.578228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.578260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.578465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.578502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 
00:30:05.457 [2024-12-05 20:49:58.578609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.578641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.578760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.578792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.578989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.579021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.579226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.579258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.579476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.579509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 
00:30:05.457 [2024-12-05 20:49:58.579626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.579658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.579779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.579810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.579917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.579949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.580123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.580157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.580275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.580307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:30:05.457 [2024-12-05 20:49:58.580424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.580456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 [2024-12-05 20:49:58.580571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.580603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.457 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.457 [2024-12-05 20:49:58.580870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.457 [2024-12-05 20:49:58.580911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.457 qpair failed and we were unable to recover it. 00:30:05.458 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.458 [2024-12-05 20:49:58.581113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.581149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 
00:30:05.458 [2024-12-05 20:49:58.581266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.581301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.458 [2024-12-05 20:49:58.581424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.581457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.581579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.581612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.581729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.581761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.581882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.581914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 
00:30:05.458 [2024-12-05 20:49:58.582041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.582084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.582301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.582333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.582453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.582484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.582604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.582635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.582814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.582846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c04000b90 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 
00:30:05.458 [2024-12-05 20:49:58.582982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.583017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.583210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.583243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.583363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.583394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.583531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.583562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.583711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.583742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 
00:30:05.458 [2024-12-05 20:49:58.583924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.583955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.584072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.584106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.584212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.584244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.584365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.584397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.584518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.584550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 
00:30:05.458 [2024-12-05 20:49:58.584660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.584691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.584957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.584989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.585113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.585145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.585319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.585351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.585549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.585582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 
00:30:05.458 [2024-12-05 20:49:58.585783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.585815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.585989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.586021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.586150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.586183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.586398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.586430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.586538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.586569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 
00:30:05.458 [2024-12-05 20:49:58.586691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.586723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.586840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.586872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.586972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.587002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.587216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.587249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.458 qpair failed and we were unable to recover it. 00:30:05.458 [2024-12-05 20:49:58.587424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.458 [2024-12-05 20:49:58.587455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.459 qpair failed and we were unable to recover it. 
00:30:05.459 [2024-12-05 20:49:58.587643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.459 [2024-12-05 20:49:58.587675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420
00:30:05.459 qpair failed and we were unable to recover it.
00:30:05.461 [2024-12-05 20:49:58.606391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.461 [2024-12-05 20:49:58.606423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.461 qpair failed and we were unable to recover it. 00:30:05.461 [2024-12-05 20:49:58.606735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.461 [2024-12-05 20:49:58.606767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f590 with addr=10.0.0.2, port=4420 00:30:05.461 qpair failed and we were unable to recover it. 00:30:05.461 A controller has encountered a failure and is being reset. 00:30:05.461 [2024-12-05 20:49:58.606925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.461 [2024-12-05 20:49:58.606968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.461 qpair failed and we were unable to recover it. 00:30:05.461 [2024-12-05 20:49:58.607110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.461 [2024-12-05 20:49:58.607143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.461 qpair failed and we were unable to recover it. 00:30:05.461 [2024-12-05 20:49:58.607319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.461 [2024-12-05 20:49:58.607351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.461 qpair failed and we were unable to recover it. 
00:30:05.461 [2024-12-05 20:49:58.607639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.461 [2024-12-05 20:49:58.607671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.461 qpair failed and we were unable to recover it. 00:30:05.461 [2024-12-05 20:49:58.607926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.461 [2024-12-05 20:49:58.607958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.461 qpair failed and we were unable to recover it. 00:30:05.461 [2024-12-05 20:49:58.608175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.461 [2024-12-05 20:49:58.608210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.461 qpair failed and we were unable to recover it. 00:30:05.461 [2024-12-05 20:49:58.608328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.462 [2024-12-05 20:49:58.608360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.462 qpair failed and we were unable to recover it. 00:30:05.462 [2024-12-05 20:49:58.608546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.462 [2024-12-05 20:49:58.608579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.462 qpair failed and we were unable to recover it. 
00:30:05.462 [2024-12-05 20:49:58.608763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.608795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.608969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.609001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.609116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.609149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.609329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.609361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.609614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.609646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.609854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.609903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.610156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.610190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.610307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.610337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.610458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.610491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.610736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.610769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.610970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.611002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.611259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.611292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.611408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.611441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.611631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.611663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.611856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.611889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.612095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.612129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.612251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.612283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.612483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.612514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.612611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.612643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.612764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.612797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.613092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.613125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.613343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.613375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.613642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.613673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.613864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.613896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.614097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.614131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.614234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.614266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 Malloc0
00:30:05.462 [2024-12-05 20:49:58.614435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.614466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.614571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.614602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.614843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.614874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.614991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.615022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.462 [2024-12-05 20:49:58.615155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.615188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.462 qpair failed and we were unable to recover it.
00:30:05.462 [2024-12-05 20:49:58.615458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.462 [2024-12-05 20:49:58.615490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.615798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.615831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.616004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.616035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.463 [2024-12-05 20:49:58.616156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.616189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.616419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.616450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.616651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.616682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.616957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.616989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.617182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.617215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.617430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.617461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.617731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.617763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.617948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.617981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.618153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.618186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.618453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.618484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.618667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.618699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.618884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.618915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.619047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.619104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.619378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.619410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.619602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.619633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.619901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.619932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.620130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.620162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.620341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.620371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.620638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.620669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.620838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.620870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.621066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.621098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.621225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.621257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.621445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.621476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.621666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.621698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.621881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.621913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.622118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.622121] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:05.463 [2024-12-05 20:49:58.622150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.622271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.622302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.622420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.622451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.622692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.622724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.622966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.622998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.623187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.623219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.463 qpair failed and we were unable to recover it.
00:30:05.463 [2024-12-05 20:49:58.623506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.463 [2024-12-05 20:49:58.623538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.623717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.623748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.623951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.623982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.624155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.624188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.624457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.624488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.624625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.624657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.624831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.624863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.625086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.625117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.625301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.625333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.625444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.625476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.625666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.625698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.625810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.625841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.626102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.626135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.626381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.626414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.626603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.626634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.626766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.626799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.626970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.627002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.627125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.627158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.627274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.627312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.627537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.627569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.627756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.627788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:05.464 [2024-12-05 20:49:58.627981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.628013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.628263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.464 [2024-12-05 20:49:58.628297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.464 [2024-12-05 20:49:58.628619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.628651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.628893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.628924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.629203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.629236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.629503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.629534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.629767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.464 [2024-12-05 20:49:58.629799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420
00:30:05.464 qpair failed and we were unable to recover it.
00:30:05.464 [2024-12-05 20:49:58.629991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.464 [2024-12-05 20:49:58.630022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.464 qpair failed and we were unable to recover it. 00:30:05.464 [2024-12-05 20:49:58.630220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.464 [2024-12-05 20:49:58.630252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 00:30:05.465 [2024-12-05 20:49:58.630428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.630461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 00:30:05.465 [2024-12-05 20:49:58.630576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.630608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 00:30:05.465 [2024-12-05 20:49:58.630864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.630895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 
00:30:05.465 [2024-12-05 20:49:58.631017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.631049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 00:30:05.465 [2024-12-05 20:49:58.631373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.631405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 00:30:05.465 [2024-12-05 20:49:58.631623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.631655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 00:30:05.465 [2024-12-05 20:49:58.631765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.631795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 00:30:05.465 [2024-12-05 20:49:58.631960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.631992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 
00:30:05.465 [2024-12-05 20:49:58.632166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.632200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 00:30:05.465 [2024-12-05 20:49:58.632317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.632349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 00:30:05.465 [2024-12-05 20:49:58.632614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.632646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 00:30:05.465 [2024-12-05 20:49:58.632858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.632890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 00:30:05.465 [2024-12-05 20:49:58.633105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.633138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c10000b90 with addr=10.0.0.2, port=4420 00:30:05.465 qpair failed and we were unable to recover it. 
00:30:05.465 [2024-12-05 20:49:58.633332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.465 [2024-12-05 20:49:58.633389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ad540 with addr=10.0.0.2, port=4420 00:30:05.465 [2024-12-05 20:49:58.633415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ad540 is same with the state(6) to be set 00:30:05.465 [2024-12-05 20:49:58.633454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ad540 (9): Bad file descriptor 00:30:05.465 [2024-12-05 20:49:58.633482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:05.465 [2024-12-05 20:49:58.633502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:05.465 [2024-12-05 20:49:58.633531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:05.465 Unable to reset the controller. 
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.465 [2024-12-05 20:49:58.647069] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.465 20:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 535879
00:30:06.399 Controller properly reset.
00:30:11.666 Initializing NVMe Controllers
00:30:11.666 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:11.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:11.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:11.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:11.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:11.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:11.666 Initialization complete. Launching workers.
00:30:11.666 Starting thread on core 1
00:30:11.666 Starting thread on core 2
00:30:11.666 Starting thread on core 3
00:30:11.666 Starting thread on core 0
00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:30:11.666
00:30:11.666 real 0m10.647s
00:30:11.666 user 0m34.841s
00:30:11.666 sys 0m5.895s
00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:11.666 ************************************
00:30:11.666 END TEST nvmf_target_disconnect_tc2
00:30:11.666 ************************************
00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect --
host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:11.666 rmmod nvme_tcp 00:30:11.666 rmmod nvme_fabrics 00:30:11.666 rmmod nvme_keyring 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 536547 ']' 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 536547 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 536547 ']' 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 536547 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 
-- # ps --no-headers -o comm= 536547 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 536547' 00:30:11.666 killing process with pid 536547 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 536547 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 536547 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.666 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.667 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:11.667 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:11.667 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.667 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.667 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.667 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.667 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.667 20:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.667 20:50:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.573 20:50:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:13.573 00:30:13.573 real 0m19.448s 00:30:13.573 user 1m1.800s 00:30:13.573 sys 0m11.079s 00:30:13.573 20:50:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:13.573 20:50:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:13.573 ************************************ 00:30:13.573 END TEST nvmf_target_disconnect 00:30:13.573 ************************************ 00:30:13.573 20:50:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:13.573 00:30:13.573 real 5m57.520s 00:30:13.573 user 11m2.428s 00:30:13.573 sys 1m59.953s 00:30:13.573 20:50:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:13.573 20:50:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.573 ************************************ 00:30:13.573 END TEST nvmf_host 00:30:13.573 ************************************ 00:30:13.573 20:50:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:13.573 20:50:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:13.573 20:50:06 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:13.573 20:50:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:13.573 20:50:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:13.573 20:50:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.833 ************************************ 00:30:13.833 START TEST nvmf_target_core_interrupt_mode 00:30:13.833 ************************************ 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:13.833 * Looking for test storage... 00:30:13.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- scripts/common.sh@344 -- # case "$op" in 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.833 20:50:07 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:13.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.833 --rc genhtml_branch_coverage=1 00:30:13.833 --rc genhtml_function_coverage=1 00:30:13.833 --rc genhtml_legend=1 00:30:13.833 --rc geninfo_all_blocks=1 00:30:13.833 --rc geninfo_unexecuted_blocks=1 00:30:13.833 00:30:13.833 ' 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:13.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.833 --rc genhtml_branch_coverage=1 00:30:13.833 --rc genhtml_function_coverage=1 00:30:13.833 --rc genhtml_legend=1 00:30:13.833 --rc geninfo_all_blocks=1 00:30:13.833 --rc geninfo_unexecuted_blocks=1 00:30:13.833 00:30:13.833 ' 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:13.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.833 --rc genhtml_branch_coverage=1 00:30:13.833 --rc genhtml_function_coverage=1 00:30:13.833 --rc genhtml_legend=1 00:30:13.833 --rc geninfo_all_blocks=1 00:30:13.833 --rc geninfo_unexecuted_blocks=1 00:30:13.833 00:30:13.833 ' 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:13.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.833 --rc genhtml_branch_coverage=1 00:30:13.833 --rc genhtml_function_coverage=1 00:30:13.833 --rc genhtml_legend=1 00:30:13.833 --rc geninfo_all_blocks=1 00:30:13.833 --rc geninfo_unexecuted_blocks=1 00:30:13.833 00:30:13.833 ' 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.833 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.833 
20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.834 20:50:07 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:13.834 
20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:13.834 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:14.094 ************************************ 00:30:14.094 START TEST nvmf_abort 00:30:14.094 ************************************ 00:30:14.094 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:14.094 * Looking for test storage... 
00:30:14.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:14.094 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:14.094 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:30:14.094 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:14.094 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:14.094 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.094 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:14.095 20:50:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:14.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.095 --rc genhtml_branch_coverage=1 00:30:14.095 --rc genhtml_function_coverage=1 00:30:14.095 --rc genhtml_legend=1 00:30:14.095 --rc geninfo_all_blocks=1 00:30:14.095 --rc geninfo_unexecuted_blocks=1 00:30:14.095 00:30:14.095 ' 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:14.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.095 --rc genhtml_branch_coverage=1 00:30:14.095 --rc genhtml_function_coverage=1 00:30:14.095 --rc genhtml_legend=1 00:30:14.095 --rc geninfo_all_blocks=1 00:30:14.095 --rc geninfo_unexecuted_blocks=1 00:30:14.095 00:30:14.095 ' 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:14.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.095 --rc genhtml_branch_coverage=1 00:30:14.095 --rc genhtml_function_coverage=1 00:30:14.095 --rc genhtml_legend=1 00:30:14.095 --rc geninfo_all_blocks=1 00:30:14.095 --rc geninfo_unexecuted_blocks=1 00:30:14.095 00:30:14.095 ' 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:14.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.095 --rc genhtml_branch_coverage=1 00:30:14.095 --rc genhtml_function_coverage=1 00:30:14.095 --rc genhtml_legend=1 00:30:14.095 --rc geninfo_all_blocks=1 00:30:14.095 --rc geninfo_unexecuted_blocks=1 00:30:14.095 00:30:14.095 ' 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.095 20:50:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:14.095 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:14.096 20:50:07 
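The PATH echoed above shows paths/export.sh prepending the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` directories on every source, so duplicates pile up. Duplicate entries are harmless (lookup stops at the first match), but they can be squeezed out; a sketch of one way to do it (the function name `dedupe_path` is mine):

```shell
#!/usr/bin/env bash
# Sketch: drop repeated entries from a PATH-like string, keeping first occurrence.
dedupe_path() {
    local -a parts
    local entry out=
    declare -A seen                      # declare inside a function makes it local
    IFS=: read -ra parts <<< "$1"        # read avoids glob expansion on entries
    for entry in "${parts[@]}"; do
        [[ -z $entry || -n ${seen[$entry]} ]] && continue
        seen[$entry]=1
        out+=${out:+:}$entry
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/sbin"
```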
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:14.096 20:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:20.665 20:50:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:20.665 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:20.665 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.665 
20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:20.665 Found net devices under 0000:af:00.0: cvl_0_0 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:20.665 Found net devices under 0000:af:00.1: cvl_0_1 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.665 20:50:13 
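The discovery loop above resolves each matching PCI address to its kernel net device by globbing `/sys/bus/pci/devices/<bdf>/net/*` and keeping only the basename (yielding cvl_0_0 and cvl_0_1 here). The mapping can be sketched against a throwaway directory standing in for sysfs (the fake tree and addresses below mirror the log but are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the PCI-address -> net-device mapping from nvmf/common.sh,
# exercised against a temp dir standing in for /sys/bus/pci/devices.
sysfs=$(mktemp -d)
trap 'rm -rf "$sysfs"' EXIT
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

pci_to_netdevs() {
    local pci=$1
    local -a pci_net_devs=("$sysfs/$pci/net/"*)   # glob the device dir (common.sh@411)
    pci_net_devs=("${pci_net_devs[@]##*/}")       # keep only basenames (common.sh@427)
    printf '%s\n' "${pci_net_devs[@]}"
}

pci_to_netdevs 0000:af:00.0
```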
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:20.665 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:20.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:30:20.666 00:30:20.666 --- 10.0.0.2 ping statistics --- 00:30:20.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.666 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:20.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:30:20.666 00:30:20.666 --- 10.0.0.1 ping statistics --- 00:30:20.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.666 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
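The nvmf_tcp_init sequence above builds a two-endpoint test rig from the pair of physical ports: one port moves into a fresh network namespace as the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits port 4420, and a ping in each direction verifies the link. A condensed sketch of those steps (interface names follow the log; the commands need root and the cvl_* NICs, so this only defines the function):

```shell
#!/usr/bin/env bash
# Sketch of the namespace plumbing from nvmf/common.sh (nvmf_tcp_init).
# Interface names cvl_0_0/cvl_0_1 are taken from the log above.
setup_nvmf_netns() {
    local ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # admit NVMe/TCP traffic on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # verify initiator -> target
}

[ "$(id -u)" -eq 0 ] || echo "skipping: needs root and the cvl_* interfaces"
```

With this layout, the target app is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is exactly the `NVMF_TARGET_NS_CMD` prefix visible in the nvmf_tgt command line further down.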
nvmfpid=541329 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 541329 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 541329 ']' 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.666 20:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:20.666 [2024-12-05 20:50:13.444944] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:20.666 [2024-12-05 20:50:13.445862] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:30:20.666 [2024-12-05 20:50:13.445900] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.666 [2024-12-05 20:50:13.523668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:20.666 [2024-12-05 20:50:13.561854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.666 [2024-12-05 20:50:13.561890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.666 [2024-12-05 20:50:13.561896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.666 [2024-12-05 20:50:13.561902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.666 [2024-12-05 20:50:13.561906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:20.666 [2024-12-05 20:50:13.563338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:20.666 [2024-12-05 20:50:13.563454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.666 [2024-12-05 20:50:13.563455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:20.666 [2024-12-05 20:50:13.630539] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:20.666 [2024-12-05 20:50:13.631233] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:20.666 [2024-12-05 20:50:13.631329] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:20.666 [2024-12-05 20:50:13.631504] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:20.924 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:20.924 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:20.924 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:20.924 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:20.924 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:20.925 [2024-12-05 20:50:14.304221] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:30:20.925 Malloc0 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.925 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:20.925 Delay0 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.183 [2024-12-05 20:50:14.392126] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.183 20:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:21.183 [2024-12-05 20:50:14.523757] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:23.710 Initializing NVMe Controllers 00:30:23.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:23.710 controller IO queue size 128 less than required 00:30:23.710 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:23.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:23.710 Initialization complete. Launching workers. 
00:30:23.710 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41425 00:30:23.710 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41482, failed to submit 66 00:30:23.710 success 41425, unsuccessful 57, failed 0 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.710 rmmod nvme_tcp 00:30:23.710 rmmod nvme_fabrics 00:30:23.710 rmmod nvme_keyring 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.710 20:50:16 
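The abort example ends its run with a fixed three-line summary (per-namespace I/O counts, aborts submitted by the controller, abort outcomes). A minimal parsing sketch for pulling those counters out of a captured console log — the function name and dictionary keys are my own, not part of SPDK:

```python
import re

def parse_abort_summary(lines):
    """Extract the counters printed at the end of an SPDK abort example run."""
    stats = {}
    for line in lines:
        # Per-namespace I/O outcome line: "... I/O completed: N, failed: M"
        if m := re.search(r"I/O completed: (\d+), failed: (\d+)", line):
            stats["io_completed"], stats["io_failed"] = map(int, m.groups())
        # Controller abort submission line: "... abort submitted N, failed to submit M"
        if m := re.search(r"abort submitted (\d+), failed to submit (\d+)", line):
            stats["aborts_submitted"], stats["aborts_not_submitted"] = map(int, m.groups())
        # Final abort outcome line: "success N, unsuccessful M, failed K"
        if m := re.search(r"success (\d+), unsuccessful (\d+), failed (\d+)", line):
            stats["success"], stats["unsuccessful"], stats["failed"] = map(int, m.groups())
    return stats
```

On the run above this yields 41425 successful aborts out of 41482 submitted — i.e. almost every "failed" I/O failed precisely because its abort succeeded, which is what the test is checking.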
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 541329 ']' 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 541329 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 541329 ']' 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 541329 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 541329 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 541329' 00:30:23.710 killing process with pid 541329 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 541329 00:30:23.710 20:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 541329 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.710 20:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.249 00:30:26.249 real 0m11.810s 00:30:26.249 user 0m10.812s 00:30:26.249 sys 0m5.703s 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:26.249 ************************************ 00:30:26.249 END TEST nvmf_abort 00:30:26.249 ************************************ 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:26.249 ************************************ 00:30:26.249 START TEST nvmf_ns_hotplug_stress 00:30:26.249 ************************************ 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:26.249 * Looking for test storage... 00:30:26.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
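The `scripts/common.sh` tracing above is the harness checking whether the installed lcov predates version 2 (`lt 1.15 2`): each version string is split on `.`, `-` or `:`, the shorter component list is padded with zeros, and the components are compared numerically. A short Python sketch that mirrors the effect of that `cmp_versions` logic (this is my restatement, not the shell code itself):

```python
import re

def version_lt(v1, v2):
    """True if v1 < v2 under component-wise dotted-version comparison.

    Mirrors the shell helper's behaviour: split on '.', '-' or ':',
    zero-pad the shorter version, compare numerically left to right.
    """
    a = [int(x) for x in re.split(r"[.\-:]", v1)]
    b = [int(x) for x in re.split(r"[.\-:]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b  # list comparison is lexicographic, which is what we want here
```

So `version_lt("1.15", "2")` is true and the harness takes the pre-2.0 lcov option path seen in the `LCOV_OPTS` export that follows.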
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:26.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.249 --rc genhtml_branch_coverage=1 00:30:26.249 --rc genhtml_function_coverage=1 00:30:26.249 --rc genhtml_legend=1 00:30:26.249 --rc geninfo_all_blocks=1 00:30:26.249 --rc geninfo_unexecuted_blocks=1 00:30:26.249 00:30:26.249 ' 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:26.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.249 --rc genhtml_branch_coverage=1 00:30:26.249 --rc genhtml_function_coverage=1 00:30:26.249 --rc genhtml_legend=1 00:30:26.249 --rc geninfo_all_blocks=1 00:30:26.249 --rc geninfo_unexecuted_blocks=1 00:30:26.249 00:30:26.249 ' 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:26.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.249 --rc genhtml_branch_coverage=1 00:30:26.249 --rc genhtml_function_coverage=1 00:30:26.249 --rc genhtml_legend=1 00:30:26.249 --rc geninfo_all_blocks=1 00:30:26.249 --rc geninfo_unexecuted_blocks=1 00:30:26.249 00:30:26.249 ' 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:26.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.249 --rc genhtml_branch_coverage=1 00:30:26.249 --rc genhtml_function_coverage=1 00:30:26.249 --rc genhtml_legend=1 00:30:26.249 --rc geninfo_all_blocks=1 00:30:26.249 --rc geninfo_unexecuted_blocks=1 00:30:26.249 00:30:26.249 ' 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.249 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.250 20:50:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:26.250 20:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.250 20:50:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:32.822 20:50:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.822 
20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:32.822 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.822 20:50:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:32.822 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.822 20:50:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:32.822 Found net devices under 0000:af:00.0: cvl_0_0 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.822 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:32.823 Found net devices under 0000:af:00.1: cvl_0_1 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:32.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:30:32.823 00:30:32.823 --- 10.0.0.2 ping statistics --- 00:30:32.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.823 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:32.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:30:32.823 00:30:32.823 --- 10.0.0.1 ping statistics --- 00:30:32.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.823 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.823 20:50:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=545552 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 545552 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 545552 ']' 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:32.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 [2024-12-05 20:50:25.381890] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:32.823 [2024-12-05 20:50:25.382793] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:30:32.823 [2024-12-05 20:50:25.382824] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.823 [2024-12-05 20:50:25.457806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:32.823 [2024-12-05 20:50:25.495892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.823 [2024-12-05 20:50:25.495926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.823 [2024-12-05 20:50:25.495932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.823 [2024-12-05 20:50:25.495937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.823 [2024-12-05 20:50:25.495942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:32.823 [2024-12-05 20:50:25.497237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.823 [2024-12-05 20:50:25.497348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.823 [2024-12-05 20:50:25.497350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:32.823 [2024-12-05 20:50:25.563050] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:32.823 [2024-12-05 20:50:25.563837] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:32.823 [2024-12-05 20:50:25.563952] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:32.823 [2024-12-05 20:50:25.564113] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.823 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:30:32.824 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:32.824 [2024-12-05 20:50:25.786001] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.824 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:32.824 20:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.824 [2024-12-05 20:50:26.142438] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.824 20:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:33.082 20:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:33.082 Malloc0 00:30:33.342 20:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:33.342 Delay0 00:30:33.342 20:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.601 20:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:33.860 NULL1 00:30:33.860 20:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:33.860 20:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=545923 00:30:33.860 20:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:33.860 20:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:33.860 20:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.119 20:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.378 20:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:34.378 20:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:34.378 true 00:30:34.637 20:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:34.637 20:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.637 20:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.895 20:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:34.896 20:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:35.155 true 00:30:35.155 20:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:35.155 20:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.093 Read completed with error (sct=0, sc=11) 00:30:36.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:36.352 20:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:36.352 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:36.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:36.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:36.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:36.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:36.352 20:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:36.353 20:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:36.612 true 00:30:36.612 20:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:36.612 20:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.549 20:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.550 20:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:37.550 20:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:37.809 true 00:30:37.809 20:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:37.809 20:50:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.067 20:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.326 20:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:38.326 20:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:38.326 true 00:30:38.326 20:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:38.326 20:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.704 20:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:39.704 20:50:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:39.704 20:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:39.962 true 00:30:39.962 20:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:39.962 20:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.896 20:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.896 20:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:40.896 20:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:41.155 true 00:30:41.155 20:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:41.155 20:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.412 20:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.674 20:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:41.674 20:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:41.674 true 00:30:41.674 20:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:41.674 20:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.048 20:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.306 20:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:43.306 20:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:43.306 true 00:30:43.306 20:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:43.306 20:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:44.242 20:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:44.501 20:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:44.502 20:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:44.502 true 00:30:44.502 20:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:44.502 20:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.761 20:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.019 20:50:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:45.019 20:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:45.019 true 00:30:45.278 20:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:45.278 20:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.218 20:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:46.477 20:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:46.477 20:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:46.736 true 00:30:46.736 20:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
545923 00:30:46.736 20:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.671 20:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.671 20:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:47.671 20:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:47.930 true 00:30:47.930 20:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:47.930 20:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.189 20:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.189 20:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:48.189 20:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:48.447 true 00:30:48.447 20:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 545923 00:30:48.447 20:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:49.822 20:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:49.822 20:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:49.823 20:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:50.082 true 00:30:50.082 20:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:50.082 20:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:51.018 20:50:44
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.018 20:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:51.018 20:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:51.275 true 00:30:51.275 20:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:51.275 20:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.275 20:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.534 20:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:51.534 20:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:51.793 true 00:30:51.793 20:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:51.793 20:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:30:51.793 20:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.050 20:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:52.050 20:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:52.308 true 00:30:52.308 20:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:52.308 20:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.566 20:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.566 20:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:52.566 20:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:52.825 true 00:30:52.825 20:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:52.825 20:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:54.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.202 20:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.202 20:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:54.202 20:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:54.202 true 00:30:54.461 20:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:54.461 20:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.030 20:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.289 20:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- #
null_size=1021 00:30:55.289 20:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:55.548 true 00:30:55.548 20:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:55.548 20:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.807 20:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.807 20:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:55.807 20:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:56.065 true 00:30:56.065 20:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:56.065 20:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:56.323 20:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.323 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:56.583 20:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:56.583 20:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:56.583 true 00:30:56.583 20:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.519 20:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.778 20:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:57.778 20:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:57.778 true 00:30:57.778 20:50:51
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:57.778 20:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.055 20:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.377 20:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:58.377 20:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:58.377 true 00:30:58.377 20:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:58.377 20:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.649 20:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.649 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:30:58.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.944 [2024-12-05 20:50:52.119887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.945 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:58.945 [2024-12-05 20:50:52.127453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.127936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 
20:50:52.127971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.128976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.129012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.129045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.129089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.129122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.129161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.129197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.129234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.129274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.129309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.129342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 [2024-12-05 20:50:52.129373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.946 
[2024-12-05 20:50:52.129400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129950] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.129984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.130938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.131679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.131724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.131762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.131817] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.131854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.131897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.131930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.131971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 
20:50:52.132931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.132997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.947 [2024-12-05 20:50:52.133544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 
[2024-12-05 20:50:52.133931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.133998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134562] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.134975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.948 [2024-12-05 20:50:52.135602] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:30:58.951 [2024-12-05 20:50:52.149221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.951 [2024-12-05 20:50:52.149256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.951 [2024-12-05 20:50:52.149292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.951 [2024-12-05 20:50:52.149327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.951 [2024-12-05 20:50:52.149368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.951 [2024-12-05 20:50:52.149408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.951 [2024-12-05 20:50:52.149437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.951 [2024-12-05 20:50:52.149475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.951 [2024-12-05 20:50:52.149511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149731] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.149990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 
20:50:52.150778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.150968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.151962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 
[2024-12-05 20:50:52.151999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152541] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.152965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.153003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.952 [2024-12-05 20:50:52.153046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.153785] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.154978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 
20:50:52.155599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.155969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 
[2024-12-05 20:50:52.156759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.156987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.157149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.157183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.157217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.157253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.157288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.157325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.157365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 20:50:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:58.953 [2024-12-05 20:50:52.157400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.157436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.157471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.953 [2024-12-05 20:50:52.157509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.157543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 20:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:58.954 [2024-12-05 20:50:52.157581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.157618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.157651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.157688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.157722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:30:58.954 [2024-12-05 20:50:52.158195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158729] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.954 [2024-12-05 20:50:52.158765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.956 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:58.957 [2024-12-05 20:50:52.171722] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.171758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.171795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.171830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.171869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.171916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.171947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.171981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.172964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173226] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.957 [2024-12-05 20:50:52.173823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.173866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.173912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.173951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.173991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 
20:50:52.174335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.174966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.175002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.175037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.175077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.175877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.175927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.175965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 
[2024-12-05 20:50:52.176269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176855] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.176982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.177981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.178015] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.178051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.178097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.178135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.178170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.178206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.178242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.178273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.958 [2024-12-05 20:50:52.178310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.178999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 
20:50:52.179175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.179975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 
[2024-12-05 20:50:52.180296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180877] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.959 [2024-12-05 20:50:52.180917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c:384 nvmf_bdev_ctrlr_read_cmd error repeated from 2024-12-05 20:50:52.180962 through 20:50:52.194537]
[2024-12-05 20:50:52.194577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.962 [2024-12-05 20:50:52.194620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.962 [2024-12-05 20:50:52.194664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.962 [2024-12-05 20:50:52.194703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.962 [2024-12-05 20:50:52.194746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.962 [2024-12-05 20:50:52.194791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.962 [2024-12-05 20:50:52.194830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.962 [2024-12-05 20:50:52.194873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.962 [2024-12-05 20:50:52.194911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.962 [2024-12-05 20:50:52.194953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.962 [2024-12-05 20:50:52.195000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195159] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.195995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196327] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.196963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 
20:50:52.197834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.197998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.963 [2024-12-05 20:50:52.198579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.198616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.198651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.198689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.198726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.198761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.198794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.198829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.198863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 
[2024-12-05 20:50:52.198905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.198946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.198985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199502] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.199977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200816] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.200998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.201039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.201082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.201124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.201161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.201196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.201231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.201270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.201305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.201341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.201380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 
20:50:52.202600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.202989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.203032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.203076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.964 [2024-12-05 20:50:52.203119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.965 [2024-12-05 20:50:52.203157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.965 [2024-12-05 20:50:52.203197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.965 [2024-12-05 20:50:52.203241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.965
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:58.967
[2024-12-05 20:50:52.216747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:58.968 [2024-12-05 20:50:52.216779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.216811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.216846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.216877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.216913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.216948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.216993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217292] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.968 [2024-12-05 20:50:52.217439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.217971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 
20:50:52.218355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.218704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.219969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 
[2024-12-05 20:50:52.220154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220659] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.220970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.221008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.221049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.221107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.969 [2024-12-05 20:50:52.221147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.221904] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.222979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 
20:50:52.223096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.223985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 
[2024-12-05 20:50:52.224206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.970 [2024-12-05 20:50:52.224994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.971 [2024-12-05 20:50:52.225041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.971 [2024-12-05 20:50:52.225087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.971 [2024-12-05 20:50:52.225132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.971 [2024-12-05 20:50:52.225175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.971 [2024-12-05 20:50:52.225212] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.971 [2024-12-05 20:50:52.225253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.238884] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.238922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.238963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.238999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.239026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.239069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.239109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.239833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.239872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.239908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.239946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.239984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240663] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.974 [2024-12-05 20:50:52.240873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.240915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.240957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.240998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 
20:50:52.241823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.241974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.242953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 
[2024-12-05 20:50:52.243139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243730] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.243989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.975 [2024-12-05 20:50:52.244487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244893] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.244991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.245730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.245771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.245808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.245844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.245881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.245918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.245955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.245989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 
20:50:52.246699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.246995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 
[2024-12-05 20:50:52.247914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.247982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248554] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.976 [2024-12-05 20:50:52.248590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.979 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.261725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.261768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.261811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.261853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.261892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.261931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.261987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.262601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263499] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.263961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 
20:50:52.264623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.980 [2024-12-05 20:50:52.264912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.264966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 
[2024-12-05 20:50:52.265757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.265999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266466] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.266983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267702] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.267996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.268526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.269219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.269261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.269294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.269329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.269364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.269399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.981 [2024-12-05 20:50:52.269436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 
20:50:52.269472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.269967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 
[2024-12-05 20:50:52.270602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.270973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.271015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.271062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.271104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.271145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.982 [2024-12-05 20:50:52.271191] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical error message repeated between 2024-12-05 20:50:52.271267 and 2024-12-05 20:50:52.284666; duplicate lines elided] 00:30:58.985
[2024-12-05 20:50:52.284709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.985 [2024-12-05 20:50:52.284756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.985 [2024-12-05 20:50:52.284798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.985 [2024-12-05 20:50:52.284838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.985 [2024-12-05 20:50:52.284886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.985 [2024-12-05 20:50:52.284928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.985 [2024-12-05 20:50:52.284970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285315] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.285994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.286031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.286072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.286109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.286151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.286189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.286224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.286260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.286306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.286343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287219] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.287963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 
20:50:52.288432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.288992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.289026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.289068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.289104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.289140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.289183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.289225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.289262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.986 [2024-12-05 20:50:52.289290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 
[2024-12-05 20:50:52.289509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.289962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290209] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.290978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291454] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.291991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.292935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 
20:50:52.292968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [2024-12-05 20:50:52.293561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.987 [... identical error repeated from 20:50:52.293602 through 20:50:52.304047; repeats omitted ...] 00:30:58.990 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:58.990 [2024-12-05 20:50:52.304747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.990 [... identical error repeated through 20:50:52.307245; repeats omitted ...] 00:30:58.991 [2024-12-05 20:50:52.307422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.307984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308116] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.308792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 
20:50:52.309640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.309976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.310012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.310048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.310084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.310123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.310157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.310193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.310229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.991 [2024-12-05 20:50:52.310270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 
[2024-12-05 20:50:52.310703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.310983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311288] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.311962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.992 [2024-12-05 20:50:52.312582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.312622] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.312661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.312693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.312727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.312766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.312804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.312841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.312878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.312915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.312959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.313722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.313762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.313800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.313841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.313882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.313923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.313969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 
20:50:52.314523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.314961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 
[2024-12-05 20:50:52.315721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.315982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.316018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.316055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.316096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.316139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.316175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.316209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.316246] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.993 [2024-12-05 20:50:52.316282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330377] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.330966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331729] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.331968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 
20:50:52.332799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.332967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.333010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.333055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.333098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.333141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.333187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.997 [2024-12-05 20:50:52.333225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.333271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.333315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.333355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.333397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.333438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.333483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.333525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.333563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.333615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.333653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.333693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 
[2024-12-05 20:50:52.334651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.334984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335169] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.335973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336287] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.336993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.337032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.337078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.337119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.337160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.337209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.337249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.998 [2024-12-05 20:50:52.337290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.337333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.337370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.337409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.337455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.337491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.337527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.337564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 
20:50:52.337602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.337973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.338969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 
[2024-12-05 20:50:52.339053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 [2024-12-05 20:50:52.339645] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.999 true 00:30:58.999 [2024-12-05 20:50:52.339688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.002
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:59.002
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 20:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:59.303 [2024-12-05 20:50:52.363603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.363908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.303 20:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.303 [2024-12-05 20:50:52.363955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.364005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.364043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.364100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.364141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.364180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.364222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.364262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.364305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.364350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.364389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.303 [2024-12-05 20:50:52.364430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364475] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.364988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365659] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.365772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.366987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 
20:50:52.367396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.304 [2024-12-05 20:50:52.367792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.367834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.367875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.367914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.367953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 
[2024-12-05 20:50:52.368665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.368973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369358] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.369996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370474] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.370981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.305 [2024-12-05 20:50:52.371617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.371657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 
20:50:52.371696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 [2024-12-05 20:50:52.372670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.306 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.386256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.386950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.386994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 
[2024-12-05 20:50:52.387521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.387963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388044] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.388995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389160] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.389981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 
20:50:52.390857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.390967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.310 [2024-12-05 20:50:52.391007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 
[2024-12-05 20:50:52.391926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.391961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392468] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.392993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393828] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.393972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.311 [2024-12-05 20:50:52.394732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.312 [2024-12-05 20:50:52.394766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.312 [2024-12-05 20:50:52.394806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.312 [2024-12-05 20:50:52.394844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.312 [2024-12-05 
20:50:52.394881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.312 [... preceding error line repeated for each read command, timestamps 2024-12-05 20:50:52.394916 through 20:50:52.409034; duplicates omitted ...] 00:30:59.312 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:59.315 [2024-12-05 20:50:52.409078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.409586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 
[2024-12-05 20:50:52.410088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410584] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.410979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.411013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.411048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.411096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.315 [2024-12-05 20:50:52.411135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.411177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.411223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.411263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.315 [2024-12-05 20:50:52.411306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411740] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.411985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.412917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 
20:50:52.413558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.413987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 
[2024-12-05 20:50:52.414733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.414979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415317] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.316 [2024-12-05 20:50:52.415758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.415795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.415831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.415868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.317 [2024-12-05 20:50:52.415909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.415946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.415981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416542] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.416972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.417965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 20:50:52.418012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.317 [2024-12-05 
20:50:52.418056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05
20:50:52.431979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.320 [2024-12-05 20:50:52.432423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.432961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 
[2024-12-05 20:50:52.433090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433810] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.433978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434946] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.434987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 20:50:52.435969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.321 [2024-12-05 
20:50:52.436006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.436989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 
[2024-12-05 20:50:52.437688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.437988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438195] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.438990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.439714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.439759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.439801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.439843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.439896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.439935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.439976] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.322 [2024-12-05 20:50:52.440525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.323 [2024-12-05 20:50:52.440567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.323 [2024-12-05 20:50:52.440598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.323 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:59.326 [2024-12-05 20:50:52.453395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:59.326 [2024-12-05 20:50:52.453425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.453458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.453493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.453526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.453561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.453607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.453647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.453685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.453718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454361] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.454996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 
20:50:52.455435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.455959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.326 [2024-12-05 20:50:52.456510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.456551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.456598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.456641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 
[2024-12-05 20:50:52.456677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457581] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.457988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458771] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.458988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.459638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 
20:50:52.460401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.327 [2024-12-05 20:50:52.460914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.460952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.460986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 
[2024-12-05 20:50:52.461457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.461995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462036] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.462555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.463019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.328 [2024-12-05 20:50:52.463068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.328 [2024-12-05 20:50:52.463109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" errors repeated through 2024-12-05 20:50:52.476477, elided ...]
> SGL length 1 00:30:59.331 [2024-12-05 20:50:52.476513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.331 [2024-12-05 20:50:52.476550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.331 [2024-12-05 20:50:52.476586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.331 [2024-12-05 20:50:52.476621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.331 [2024-12-05 20:50:52.476659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.476696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.476741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.476781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.476823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.476870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.476910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.476953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.476996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477508] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.477965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 
20:50:52.478669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.478989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 
[2024-12-05 20:50:52.479737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.479838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480718] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.480986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.481031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.332 [2024-12-05 20:50:52.481082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481910] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.481987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.482792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.483516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.483567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.483609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.483649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 
20:50:52.483689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.483729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.483772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.483811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.483851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.483891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.483931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.483971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 
[2024-12-05 20:50:52.484855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.484972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.485000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.485038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.485079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.485114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.485154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.485195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.485234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.485271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.333 [2024-12-05 20:50:52.485307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.334 [2024-12-05 20:50:52.485345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.334 [2024-12-05 20:50:52.485388] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.335 Message suppressed 999 times: [2024-12-05 20:50:52.490090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.335 Read completed with error (sct=0, sc=15) 00:30:59.337 [2024-12-05 20:50:52.499257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 
20:50:52.499904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.499988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.500586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 
[2024-12-05 20:50:52.501419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.337 [2024-12-05 20:50:52.501633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.501672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.501712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.501752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.501796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.501838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.501877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.501921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.501964] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.502977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503189] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.503971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 
20:50:52.504675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.504999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.338 [2024-12-05 20:50:52.505675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.505718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.505760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.505803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 
[2024-12-05 20:50:52.505843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.505885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.505938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.505981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506437] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.506993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.339 [2024-12-05 20:50:52.507937] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.521489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:59.342 [2024-12-05 20:50:52.521677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.521718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.521755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.521790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.521822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.521858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.521897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.521939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.521980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522247] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.342 [2024-12-05 20:50:52.522670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.522713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.522753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.522792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.522835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.522889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.522930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.522972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 
20:50:52.523511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.523996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.524030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.524073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.524110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.524145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.524189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.524229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.524261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.524991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 
[2024-12-05 20:50:52.525294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525854] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.525984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.526969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527080] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.343 [2024-12-05 20:50:52.527473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.527646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.527682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.527715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.527748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.527790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.527828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.527861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.527898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.527936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.527974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 
20:50:52.528276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.528965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 
[2024-12-05 20:50:52.529531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.529978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.530015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.530052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.530090] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.530124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.530160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.530201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.530941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.530979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.531014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.531052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.531093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.531130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.531168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.531204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.531242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.531279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.344 [2024-12-05 20:50:52.531312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.344 [2024-12-05 20:50:52.531354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:59.345 [... the same ctrlr_bdev.c:384 read error repeated for every timestamp from 20:50:52.531390 through 20:50:52.533426 ...]
00:30:59.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:30:59.345 20:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:59.642 [... 6 further identical "Message suppressed 999 times" lines elided ...]
00:30:59.642 [2024-12-05 20:50:52.750531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:59.646 [... the same ctrlr_bdev.c:384 read error repeated for every timestamp from 20:50:52.750593 through 20:50:52.761126 ...]
[2024-12-05 20:50:52.761168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761816] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.761900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.762984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 
20:50:52.763599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.646 [2024-12-05 20:50:52.763887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.763931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.763969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 
[2024-12-05 20:50:52.764736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.764995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765372] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.765999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.766034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.766065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.766100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.766134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.766159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.766202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.766238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.766278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.766316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.766365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.647 [2024-12-05 20:50:52.766404] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.766976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.767960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 
20:50:52.767995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.768994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.769033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 
[2024-12-05 20:50:52.769078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.769118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.769168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.769209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.769249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.769291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.648 [2024-12-05 20:50:52.769332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769664] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.769962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.770000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.770049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.770096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.770140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.770180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.770220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.649 [2024-12-05 20:50:52.770396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.649 [2024-12-05 20:50:52.770440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:59.649 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same "ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" line repeated verbatim for timestamps 2024-12-05 20:50:52.770478 through 20:50:52.784070 (log clock 00:30:59.649-00:30:59.653) ...]
> SGL length 1 00:30:59.653 [2024-12-05 20:50:52.784112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.653 [2024-12-05 20:50:52.784155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.653 [2024-12-05 20:50:52.784197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.653 [2024-12-05 20:50:52.784240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.653 [2024-12-05 20:50:52.784279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.653 [2024-12-05 20:50:52.784321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.653 [2024-12-05 20:50:52.784371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.653 [2024-12-05 20:50:52.784410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.653 [2024-12-05 20:50:52.784448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784691] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.784988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 
20:50:52.785785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.785865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.786976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.787018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.787067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.787113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 
[2024-12-05 20:50:52.787152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.654 [2024-12-05 20:50:52.787193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787716] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 20:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:59.655 [2024-12-05 20:50:52.787789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.787973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 20:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:59.655 [2024-12-05 20:50:52.788151] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.788999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 20:50:52.789693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.655 [2024-12-05 
20:50:52.789737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.789789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.789828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.789870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.789910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.789946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.789988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.790944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 
[2024-12-05 20:50:52.790984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791616] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.791693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.656 [2024-12-05 20:50:52.792853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:30:59.661 [2024-12-05 20:50:52.806786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.806823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.806859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.806895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.806932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.806966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807296] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.661 [2024-12-05 20:50:52.807655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.807694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.807729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.807763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.807799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.807833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.807870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.807906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.807942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.807982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 
20:50:52.808377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.808979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.809153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.809195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.809234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.809276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.809332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.809373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.809415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.809458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.809500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.810080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.810123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.810159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.810194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.810222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 
[2024-12-05 20:50:52.810274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.810314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.810351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.810397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.662 [2024-12-05 20:50:52.810439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810805] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.810986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.811999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812044] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.812981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.813014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.813052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.813095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.813134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.813175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.813213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.813251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.813288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 20:50:52.813323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.663 [2024-12-05 
20:50:52.813366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.813973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 
[2024-12-05 20:50:52.814442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.814857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.815337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.815385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.815428] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.664 [2024-12-05 20:50:52.815469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated for each failed read, timestamps 20:50:52.815512 through 20:50:52.829024 ...]
00:30:59.665 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:30:59.669 [2024-12-05 20:50:52.829064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.829534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830101] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.669 [2024-12-05 20:50:52.830548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.830590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.830632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.830695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.830737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.830785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.830828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.830870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.830911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.830949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.830986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 
20:50:52.831346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.831997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 
[2024-12-05 20:50:52.832499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.832970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.833000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.833039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.833082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.833706] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.833754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.833794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.833845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.833882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.833920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.833964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.834001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.834042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.670 [2024-12-05 20:50:52.834091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834919] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.834959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.835954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 
20:50:52.835993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.671 [2024-12-05 20:50:52.836752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.836793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.836835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.836883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.836923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.836964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 
[2024-12-05 20:50:52.837299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.672 [2024-12-05 20:50:52.837794] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852077] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.852989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853217] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.676 [2024-12-05 20:50:52.853918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.853953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.853991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.854029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.854648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.854692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.854730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.854765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.854801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.854834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.854870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.854909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.854960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 
20:50:52.854998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.855986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 
[2024-12-05 20:50:52.856216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856708] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.856952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.857995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.858035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.858084] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.858126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.858170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.858211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.858256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.858299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.677 [2024-12-05 20:50:52.858341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.858996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 
20:50:52.859276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.859589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 
[2024-12-05 20:50:52.860733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.860973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.861014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.861066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.861105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.861145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.861191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.861232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.861276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.861320] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.678 [2024-12-05 20:50:52.861360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.679 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:59.681 [2024-12-05 20:50:52.874552] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.874589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.874624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.874662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.874698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.874863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.874907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.874950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.874990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.681 [2024-12-05 20:50:52.875536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875783] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.875965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 
20:50:52.876902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.876976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.877948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 
[2024-12-05 20:50:52.878604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.878980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879196] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.879989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.880022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.880052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.880092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.880130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.880288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.880325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.880361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.880395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.682 [2024-12-05 20:50:52.880431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.880466] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.880497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.880812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.880856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.880892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.880926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.880954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.880996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 
20:50:52.881783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.881965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.882890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 
[2024-12-05 20:50:52.883367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883945] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.683 [2024-12-05 20:50:52.883987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical error line repeated many times between 20:50:52.884 and 20:50:52.897; duplicate log output omitted ...]
[2024-12-05 20:50:52.897298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897825] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.897973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.898964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899136] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.686 [2024-12-05 20:50:52.899487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.899993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 
20:50:52.900233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.900970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 
[2024-12-05 20:50:52.901872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.901996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902469] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.902983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903736] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.903916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.904558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.904604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.904645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.904699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.904738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.904778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.904820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.904860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.904908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.904950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.904988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.905032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.905075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.905106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.905141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.905178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.905213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.905248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.687 [2024-12-05 20:50:52.905282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 
20:50:52.905460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.905965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 [2024-12-05 20:50:52.906001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.688 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:59.688 
true 00:30:59.689 20:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:30:59.689 20:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.685 20:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.964 20:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:00.964 20:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:00.964 true 00:31:00.964 20:50:54
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:31:00.964 20:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.242 20:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.534 20:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:01.534 20:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:01.534 true 00:31:01.534 20:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:31:01.534 20:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.011 20:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.011 20:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:03.011 20:50:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:03.011 true 00:31:03.011 20:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:31:03.011 20:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.269 20:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.527 20:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:03.527 20:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:03.527 true 00:31:03.527 20:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:31:03.527 20:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.904 Initializing NVMe Controllers 00:31:04.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.904 Controller IO queue size 128, less than required. 00:31:04.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.904 Controller IO queue size 128, less than required. 
00:31:04.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:04.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:04.904 Initialization complete. Launching workers. 00:31:04.904 ======================================================== 00:31:04.904 Latency(us) 00:31:04.904 Device Information : IOPS MiB/s Average min max 00:31:04.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2784.52 1.36 29544.66 934.79 1011843.61 00:31:04.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17844.30 8.71 7157.13 1453.40 406848.64 00:31:04.904 ======================================================== 00:31:04.904 Total : 20628.82 10.07 10179.05 934.79 1011843.61 00:31:04.904 00:31:04.904 20:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.904 20:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:04.904 20:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:05.163 true 00:31:05.163 20:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 545923 00:31:05.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (545923) - No such process 00:31:05.163 20:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 545923 00:31:05.163 20:50:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.423 20:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.423 20:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:31:05.423 20:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:05.423 20:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:05.423 20:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:05.423 20:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:05.682 null0 00:31:05.682 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:05.682 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:05.682 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:05.941 null1 00:31:05.941 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:05.941 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:05.941 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:05.941 null2 00:31:05.941 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:05.941 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:05.941 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:06.226 null3 00:31:06.226 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:06.226 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:06.226 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:06.226 null4 00:31:06.486 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:06.486 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:06.486 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:06.486 null5 00:31:06.486 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:31:06.486 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:06.486 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:06.746 null6 00:31:06.746 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:06.746 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:06.746 20:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:06.746 null7 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:06.746 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:06.747 20:51:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 551725 551727 551728 551730 551732 551734 551736 551738 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.747 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:07.006 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.006 20:51:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:07.006 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:07.006 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:07.006 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:07.006 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:07.006 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:07.006 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.265 20:51:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.265 20:51:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:07.265 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.266 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.266 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:07.266 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.266 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.266 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:07.266 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.266 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.266 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:07.525 20:51:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:07.525 20:51:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:07.525 20:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:07.525 20:51:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:07.784 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.784 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:07.784 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:07.784 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:07.784 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:07.784 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:07.784 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:07.784 20:51:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.044 20:51:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:08.044 20:51:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.044 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:08.304 20:51:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
7 nqn.2016-06.io.spdk:cnode1 null6 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.304 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:08.564 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:08.564 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:08.564 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:08.564 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:08.564 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:08.564 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:08.564 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.564 20:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.823 20:51:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.823 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.824 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.824 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:08.824 20:51:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.824 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:08.824 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:08.824 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:08.824 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:09.083 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.084 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:09.343 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:09.343 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:09.343 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:09.343 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:31:09.343 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:09.343 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.343 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:09.343 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.602 20:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:09.602 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:09.862 20:51:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.862 20:51:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.862 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:10.121 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:10.121 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:10.121 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:10.121 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:10.121 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:10.121 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:10.121 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:10.121 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:10.380 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:10.639 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.639 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:10.639 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:10.639 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:10.639 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:10.639 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:10.639 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.639 20:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:10.639 20:51:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:10.639 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:10.639 rmmod nvme_tcp 00:31:10.639 rmmod nvme_fabrics 00:31:10.639 rmmod nvme_keyring 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 545552 ']' 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 545552 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 545552 ']' 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 545552 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@959 -- # uname 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 545552 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 545552' 00:31:10.898 killing process with pid 545552 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 545552 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 545552 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # 
iptables-restore 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.898 20:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:13.434 00:31:13.434 real 0m47.236s 00:31:13.434 user 2m57.228s 00:31:13.434 sys 0m19.857s 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:13.434 ************************************ 00:31:13.434 END TEST nvmf_ns_hotplug_stress 00:31:13.434 ************************************ 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- 
# set +x 00:31:13.434 ************************************ 00:31:13.434 START TEST nvmf_delete_subsystem 00:31:13.434 ************************************ 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:13.434 * Looking for test storage... 00:31:13.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read 
-ra ver2 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:13.434 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:13.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.435 --rc genhtml_branch_coverage=1 00:31:13.435 --rc genhtml_function_coverage=1 00:31:13.435 --rc genhtml_legend=1 00:31:13.435 --rc geninfo_all_blocks=1 00:31:13.435 --rc geninfo_unexecuted_blocks=1 00:31:13.435 00:31:13.435 ' 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:13.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.435 --rc genhtml_branch_coverage=1 00:31:13.435 --rc genhtml_function_coverage=1 00:31:13.435 --rc genhtml_legend=1 00:31:13.435 --rc geninfo_all_blocks=1 00:31:13.435 --rc geninfo_unexecuted_blocks=1 00:31:13.435 00:31:13.435 ' 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:13.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.435 --rc genhtml_branch_coverage=1 00:31:13.435 --rc genhtml_function_coverage=1 00:31:13.435 --rc genhtml_legend=1 00:31:13.435 --rc geninfo_all_blocks=1 00:31:13.435 --rc geninfo_unexecuted_blocks=1 00:31:13.435 00:31:13.435 ' 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:13.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.435 --rc genhtml_branch_coverage=1 00:31:13.435 --rc genhtml_function_coverage=1 00:31:13.435 --rc genhtml_legend=1 00:31:13.435 --rc geninfo_all_blocks=1 00:31:13.435 --rc geninfo_unexecuted_blocks=1 00:31:13.435 00:31:13.435 ' 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:13.435 20:51:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:13.435 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:13.436 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.436 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.436 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.436 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:13.436 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:13.436 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:13.436 20:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.005 20:51:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.005 20:51:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:20.005 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:20.005 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.005 20:51:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:20.005 Found net devices under 0000:af:00.0: cvl_0_0 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:20.005 Found net devices under 0000:af:00.1: cvl_0_1 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.005 20:51:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.005 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.006 20:51:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:20.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:31:20.006 00:31:20.006 --- 10.0.0.2 ping statistics --- 00:31:20.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.006 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:20.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:31:20.006 00:31:20.006 --- 10.0.0.1 ping statistics --- 00:31:20.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.006 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.006 
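The nvmf_tcp_init steps above (nvmf/common.sh@250-291) move the target NIC into a fresh network namespace, address both sides out of 10.0.0.0/24, open port 4420, and verify connectivity with two pings. A minimal dry-run sketch of that split, assuming the cvl_0_0/cvl_0_1 interface names and addresses from the log; the helper name setup_tcp_netns is mine, and run() only echoes commands so the sketch works without root or those NICs.

```shell
#!/usr/bin/env bash
# Dry-run sketch: run() prints each command instead of executing it.
run() { echo "+ $*"; }

setup_tcp_netns() {
  local target_if=$1 initiator_if=$2
  local ns="${target_if}_ns_spdk"
  run ip -4 addr flush "$target_if"
  run ip -4 addr flush "$initiator_if"
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"            # target side lives in the namespace
  run ip addr add 10.0.0.1/24 dev "$initiator_if"     # initiator keeps the host side
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

setup_tcp_netns cvl_0_0 cvl_0_1
```

Swapping run() for a real executor (as root) reproduces the sequence the log shows, ending with the ping in each direction.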
20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=556709 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 556709 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 556709 ']' 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
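The nvmfappstart/waitforlisten step above boils down to: launch nvmf_tgt in the namespace, record its pid, and poll until the RPC socket is up. A generic polling helper in that spirit; wait_for_path is my own name, and the real waitforlisten in autotest_common.sh also retries an actual RPC probe rather than only checking a path.

```shell
#!/usr/bin/env bash
# Poll until a path appears, with a bounded number of 0.1s retries.
# The real helper tests the RPC Unix socket (-S /var/tmp/spdk.sock).
wait_for_path() {
  local path=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    [[ -e "$path" ]] && return 0
    sleep 0.1
  done
  echo "timed out waiting for $path" >&2
  return 1
}

# Shape of the log's flow (not executed here; needs root and an SPDK build):
#   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
#   nvmfpid=$!
#   wait_for_path /var/tmp/spdk.sock
```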
00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.006 [2024-12-05 20:51:12.651744] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:20.006 [2024-12-05 20:51:12.652640] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:31:20.006 [2024-12-05 20:51:12.652679] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.006 [2024-12-05 20:51:12.729209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:20.006 [2024-12-05 20:51:12.769293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.006 [2024-12-05 20:51:12.769328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.006 [2024-12-05 20:51:12.769335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.006 [2024-12-05 20:51:12.769341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.006 [2024-12-05 20:51:12.769345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.006 [2024-12-05 20:51:12.770516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.006 [2024-12-05 20:51:12.770516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.006 [2024-12-05 20:51:12.838229] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:20.006 [2024-12-05 20:51:12.838786] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:20.006 [2024-12-05 20:51:12.838962] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.006 [2024-12-05 20:51:12.915252] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.006 [2024-12-05 20:51:12.939503] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.006 NULL1 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.006 Delay0 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=556912 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:20.006 20:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:20.006 [2024-12-05 20:51:13.046628] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
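The RPC sequence delete_subsystem.sh drives before the first perf run, reconstructed from the xtrace above. rpc() here is a stand-in that prints instead of invoking scripts/rpc.py against a live target, so the sequence can be inspected standalone; the comments on argument meaning are my hedged reading, not something the log states.

```shell
#!/usr/bin/env bash
# Stand-in for scripts/rpc.py: print the call instead of issuing it.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512           # null bdev: size 1000, 512-byte blocks (units assumed MB)
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large delays (assumed microseconds)
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The Delay0 bdev is what keeps I/O in flight long enough for the nvmf_delete_subsystem call at delete_subsystem.sh@32 to race with outstanding requests.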
00:31:21.904 20:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:21.904 20:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:21.904 20:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:21.904 Read completed with error (sct=0, sc=8)
00:31:21.904 starting I/O failed: -6
00:31:21.904 [several hundred further "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines while the subsystem is torn down, interleaved with the qpair state errors below; identical repeats condensed]
00:31:21.905 [2024-12-05 20:51:15.137967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3f0e0 is same with the state(6) to be set
00:31:21.905 [2024-12-05 20:51:15.138440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f996c00d350 is same with the state(6) to be set
00:31:22.840 [2024-12-05 20:51:16.102226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa405f0 is same with the state(6) to be set
00:31:22.840 [2024-12-05 20:51:16.140479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3ef00 is same with the state(6) to be set
00:31:22.841 [2024-12-05 20:51:16.140990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f996c00d020 is same with the state(6) to be set
00:31:22.841 [2024-12-05 20:51:16.141093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f996c00d680 is same with the state(6) to be set
00:31:22.841 [2024-12-05 20:51:16.141849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3f4a0 is same with the state(6) to be set
00:31:22.841 Initializing NVMe Controllers
00:31:22.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:22.841 Controller IO queue size 128, less than required.
00:31:22.841 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:22.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:22.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:22.841 Initialization complete. Launching workers.
00:31:22.841 ========================================================
00:31:22.841                                                                      Latency(us)
00:31:22.841 Device Information                                           :       IOPS      MiB/s    Average        min        max
00:31:22.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     169.73       0.08  897087.56     429.59 1042692.70
00:31:22.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     165.27       0.08  906176.15     236.43 1012932.16
00:31:22.841 ========================================================
00:31:22.841 Total                                                        :     335.00       0.16  901571.26     236.43 1042692.70
00:31:22.841
00:31:22.841 [2024-12-05 20:51:16.142524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa405f0 (9): Bad file descriptor
00:31:22.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:22.841 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:22.841 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:22.841 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 556912
00:31:22.841 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:23.408 20:51:16
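One cross-check on the latency summary above: the Total average is the IOPS-weighted mean of the two per-core rows. The few hundredths of a microsecond of drift against the printed 901571.26 us comes from the per-row values themselves being rounded. awk is used for the float math.

```shell
#!/usr/bin/env bash
# IOPS-weighted mean of the two per-core average latencies from the table.
awk 'BEGIN {
  iops2 = 169.73; avg2 = 897087.56;   # NSID 1 from core 2
  iops3 = 165.27; avg3 = 906176.15;   # NSID 1 from core 3
  total = (iops2 * avg2 + iops3 * avg3) / (iops2 + iops3);
  printf "weighted avg = %.2f us\n", total
}'
```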
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 556912 00:31:23.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (556912) - No such process 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 556912 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 556912 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 556912 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:23.408 20:51:16 
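The delay loop the xtrace shows (delete_subsystem.sh@34-38 here, and again at @56-58 below) is a plain kill -0 poll: after the subsystem is deleted mid-I/O, wait for the perf process to die, half a second at a time, giving up after ~15s. A generic sketch of that pattern; the function name is mine, and the retry cap is made a parameter so it can be exercised quickly.

```shell
#!/usr/bin/env bash
# Poll `kill -0 $pid` every 0.5s until the process is gone.
# Returns 1 if it is still alive after max_polls checks.
wait_for_perf_exit() {
  local pid=$1 max_polls=${2:-30} delay=0
  while kill -0 "$pid" 2>/dev/null; do
    if (( delay >= max_polls )); then
      return 1
    fi
    delay=$(( delay + 1 ))
    sleep 0.5
  done
  return 0
}
```

Once kill -0 fails, the script's follow-up `NOT wait $pid` asserts that wait also reports the process as gone, which is the "No such process" line above.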
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:23.408 [2024-12-05 20:51:16.671652] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:23.408 20:51:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=557443 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:23.408 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 557443 00:31:23.409 20:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:23.409 [2024-12-05 20:51:16.756532] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:31:23.976 20:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:23.976 20:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 557443 00:31:23.976 20:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:24.545 20:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:24.545 20:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 557443 00:31:24.545 20:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:24.804 20:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:24.804 20:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 557443 00:31:24.804 20:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:25.372 20:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:25.372 20:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 557443 00:31:25.372 20:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:25.941 20:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:25.941 20:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 557443 00:31:25.941 20:51:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:26.509 20:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:26.509 20:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 557443 00:31:26.509 20:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:26.768 Initializing NVMe Controllers 00:31:26.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:26.768 Controller IO queue size 128, less than required. 00:31:26.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:26.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:26.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:26.768 Initialization complete. Launching workers. 
00:31:26.768 ======================================================== 00:31:26.768 Latency(us) 00:31:26.768 Device Information : IOPS MiB/s Average min max 00:31:26.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002556.32 1000149.73 1043593.68 00:31:26.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003429.64 1000126.67 1041212.34 00:31:26.768 ======================================================== 00:31:26.769 Total : 256.00 0.12 1002992.98 1000126.67 1043593.68 00:31:26.769 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 557443 00:31:27.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (557443) - No such process 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 557443 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:27.028 rmmod nvme_tcp 00:31:27.028 rmmod nvme_fabrics 00:31:27.028 rmmod nvme_keyring 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 556709 ']' 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 556709 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 556709 ']' 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 556709 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 556709 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 556709' 00:31:27.028 killing process with pid 556709 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 556709 00:31:27.028 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 556709 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.287 20:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.195 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:29.195 00:31:29.195 real 0m16.108s 00:31:29.195 user 0m26.102s 00:31:29.195 sys 0m6.058s 00:31:29.195 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:29.195 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:29.195 ************************************ 00:31:29.195 END TEST nvmf_delete_subsystem 00:31:29.195 ************************************ 00:31:29.195 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:29.195 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:29.195 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:29.195 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:29.455 ************************************ 00:31:29.455 START TEST nvmf_host_management 00:31:29.455 ************************************ 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:29.456 * Looking for test storage... 
00:31:29.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:29.456 20:51:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:29.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.456 --rc genhtml_branch_coverage=1 00:31:29.456 --rc genhtml_function_coverage=1 00:31:29.456 --rc genhtml_legend=1 00:31:29.456 --rc geninfo_all_blocks=1 00:31:29.456 --rc geninfo_unexecuted_blocks=1 00:31:29.456 00:31:29.456 ' 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:29.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.456 --rc genhtml_branch_coverage=1 00:31:29.456 --rc genhtml_function_coverage=1 00:31:29.456 --rc genhtml_legend=1 00:31:29.456 --rc geninfo_all_blocks=1 00:31:29.456 --rc geninfo_unexecuted_blocks=1 00:31:29.456 00:31:29.456 ' 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:29.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.456 --rc genhtml_branch_coverage=1 00:31:29.456 --rc genhtml_function_coverage=1 00:31:29.456 --rc genhtml_legend=1 00:31:29.456 --rc geninfo_all_blocks=1 00:31:29.456 --rc geninfo_unexecuted_blocks=1 00:31:29.456 00:31:29.456 ' 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:29.456 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.456 --rc genhtml_branch_coverage=1 00:31:29.456 --rc genhtml_function_coverage=1 00:31:29.456 --rc genhtml_legend=1 00:31:29.456 --rc geninfo_all_blocks=1 00:31:29.456 --rc geninfo_unexecuted_blocks=1 00:31:29.456 00:31:29.456 ' 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.456 20:51:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.456 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.457 
20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:29.457 20:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.025 
20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.025 20:51:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:36.025 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.025 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.026 20:51:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:36.026 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.026 20:51:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:36.026 Found net devices under 0000:af:00.0: cvl_0_0 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:36.026 Found net devices under 0000:af:00.1: cvl_0_1 00:31:36.026 20:51:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:31:36.026 00:31:36.026 --- 10.0.0.2 ping statistics --- 00:31:36.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.026 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:36.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:31:36.026 00:31:36.026 --- 10.0.0.1 ping statistics --- 00:31:36.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.026 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=561701 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 561701 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 561701 ']' 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.026 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.027 20:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.027 [2024-12-05 20:51:28.866235] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:36.027 [2024-12-05 20:51:28.867113] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:31:36.027 [2024-12-05 20:51:28.867149] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.027 [2024-12-05 20:51:28.943714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.027 [2024-12-05 20:51:28.983235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.027 [2024-12-05 20:51:28.983272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.027 [2024-12-05 20:51:28.983279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.027 [2024-12-05 20:51:28.983285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.027 [2024-12-05 20:51:28.983290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:36.027 [2024-12-05 20:51:28.984770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.027 [2024-12-05 20:51:28.984884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:36.027 [2024-12-05 20:51:28.985004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.027 [2024-12-05 20:51:28.985005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:36.027 [2024-12-05 20:51:29.051270] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.027 [2024-12-05 20:51:29.052114] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:36.027 [2024-12-05 20:51:29.052190] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:36.027 [2024-12-05 20:51:29.052379] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:36.027 [2024-12-05 20:51:29.052444] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:36.285 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.285 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:36.285 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.285 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:36.285 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.545 [2024-12-05 20:51:29.737692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.545 20:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.545 Malloc0 00:31:36.545 [2024-12-05 20:51:29.829948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=561998 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 561998 /var/tmp/bdevperf.sock 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 561998 ']' 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:36.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:36.545 { 00:31:36.545 "params": { 00:31:36.545 "name": "Nvme$subsystem", 00:31:36.545 "trtype": "$TEST_TRANSPORT", 00:31:36.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.545 "adrfam": "ipv4", 00:31:36.545 "trsvcid": "$NVMF_PORT", 00:31:36.545 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.545 "hdgst": ${hdgst:-false}, 00:31:36.545 "ddgst": ${ddgst:-false} 00:31:36.545 }, 00:31:36.545 "method": "bdev_nvme_attach_controller" 00:31:36.545 } 00:31:36.545 EOF 00:31:36.545 )") 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:36.545 20:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:36.545 "params": { 00:31:36.545 "name": "Nvme0", 00:31:36.545 "trtype": "tcp", 00:31:36.545 "traddr": "10.0.0.2", 00:31:36.545 "adrfam": "ipv4", 00:31:36.545 "trsvcid": "4420", 00:31:36.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:36.545 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:36.545 "hdgst": false, 00:31:36.545 "ddgst": false 00:31:36.545 }, 00:31:36.545 "method": "bdev_nvme_attach_controller" 00:31:36.545 }' 00:31:36.546 [2024-12-05 20:51:29.925052] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:31:36.546 [2024-12-05 20:51:29.925102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561998 ] 00:31:36.806 [2024-12-05 20:51:29.999086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.806 [2024-12-05 20:51:30.043666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.806 Running I/O for 10 seconds... 
00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:37.372 20:51:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1107 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1107 -ge 100 ']' 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:37.372 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.631 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.631 
[2024-12-05 20:51:30.817414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dbc50 is same with the state(6) to be set 00:31:37.631 [2024-12-05 20:51:30.817454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dbc50 is same with the state(6) to be set 00:31:37.631 [2024-12-05 20:51:30.817461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dbc50 is same with the state(6) to be set 00:31:37.631 [2024-12-05 20:51:30.817467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dbc50 is same with the state(6) to be set 00:31:37.631 [2024-12-05 20:51:30.817473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dbc50 is same with the state(6) to be set 00:31:37.631 [2024-12-05 20:51:30.817479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dbc50 is same with the state(6) to be set 00:31:37.631 [2024-12-05 20:51:30.817484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dbc50 is same with the state(6) to be set 00:31:37.631 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.631 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:37.631 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.631 [2024-12-05 20:51:30.822783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.631 [2024-12-05 20:51:30.822814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.631 [2024-12-05 
20:51:30.822824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.632 [2024-12-05 20:51:30.822837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.822845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.632 [2024-12-05 20:51:30.822851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.822858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.632 [2024-12-05 20:51:30.822864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:37.632 [2024-12-05 20:51:30.822871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4630 is same with the state(6) to be set 00:31:37.632 [2024-12-05 20:51:30.822913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.822923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.822935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.822941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.822949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.822954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.822962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.822968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.822975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.822981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.822989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.822995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 
20:51:30.823022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 
[2024-12-05 20:51:30.823332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.632 [2024-12-05 20:51:30.823357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.632 [2024-12-05 20:51:30.823364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 
20:51:30.823639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.633 [2024-12-05 20:51:30.823779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.633 [2024-12-05 20:51:30.823784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.633 [2024-12-05 20:51:30.824650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:31:37.633 task offset: 24576 on job bdev=Nvme0n1 fails
00:31:37.633
00:31:37.633 Latency(us)
00:31:37.633 [2024-12-05T19:51:31.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:37.633 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:37.633 Job: Nvme0n1 ended in about 0.58 seconds with error
00:31:37.633 Verification LBA range: start 0x0 length 0x400
00:31:37.633 Nvme0n1 : 0.58 2089.52 130.60 109.97 0.00 28527.06 1467.11 37891.72
00:31:37.633 [2024-12-05T19:51:31.074Z] ===================================================================================================================
00:31:37.633 [2024-12-05T19:51:31.074Z] Total : 2089.52 130.60 109.97 0.00 28527.06 1467.11 37891.72
00:31:37.633 [2024-12-05 20:51:30.826830] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:37.633 [2024-12-05 20:51:30.826848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d4630 (9): Bad file descriptor
00:31:37.633 [2024-12-05 20:51:30.829588] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:31:37.634 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.634 20:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 561998 00:31:38.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (561998) - No such process 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:38.570 { 00:31:38.570 "params": { 00:31:38.570 "name": "Nvme$subsystem", 00:31:38.570 "trtype": "$TEST_TRANSPORT", 00:31:38.570 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:38.570 "adrfam": "ipv4", 00:31:38.570 "trsvcid": "$NVMF_PORT", 00:31:38.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.570 "hdgst": ${hdgst:-false}, 00:31:38.570 "ddgst": ${ddgst:-false} 00:31:38.570 }, 00:31:38.570 "method": "bdev_nvme_attach_controller" 00:31:38.570 } 00:31:38.570 EOF 00:31:38.570 )") 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:38.570 20:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:38.570 "params": { 00:31:38.570 "name": "Nvme0", 00:31:38.570 "trtype": "tcp", 00:31:38.570 "traddr": "10.0.0.2", 00:31:38.570 "adrfam": "ipv4", 00:31:38.570 "trsvcid": "4420", 00:31:38.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:38.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:38.570 "hdgst": false, 00:31:38.570 "ddgst": false 00:31:38.570 }, 00:31:38.570 "method": "bdev_nvme_attach_controller" 00:31:38.570 }' 00:31:38.570 [2024-12-05 20:51:31.885527] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:31:38.570 [2024-12-05 20:51:31.885571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid562281 ] 00:31:38.570 [2024-12-05 20:51:31.957526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.570 [2024-12-05 20:51:31.992989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.829 Running I/O for 1 seconds... 
00:31:40.206 2187.00 IOPS, 136.69 MiB/s
00:31:40.206 Latency(us)
00:31:40.206 [2024-12-05T19:51:33.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:40.206 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:40.206 Verification LBA range: start 0x0 length 0x400
00:31:40.206 Nvme0n1 : 1.01 2239.80 139.99 0.00 0.00 28052.93 2129.92 24546.21
00:31:40.206 [2024-12-05T19:51:33.647Z] ===================================================================================================================
00:31:40.206 [2024-12-05T19:51:33.647Z] Total : 2239.80 139.99 0.00 0.00 28052.93 2129.92 24546.21
00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:40.206 
20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:40.206 rmmod nvme_tcp 00:31:40.206 rmmod nvme_fabrics 00:31:40.206 rmmod nvme_keyring 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 561701 ']' 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 561701 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 561701 ']' 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 561701 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 561701 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:40.206 20:51:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 561701' 00:31:40.206 killing process with pid 561701 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 561701 00:31:40.206 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 561701 00:31:40.464 [2024-12-05 20:51:33.718227] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.465 20:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:42.995 00:31:42.995 real 0m13.179s 00:31:42.995 user 0m18.939s 00:31:42.995 sys 0m6.443s 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:42.995 ************************************ 00:31:42.995 END TEST nvmf_host_management 00:31:42.995 ************************************ 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:42.995 ************************************ 00:31:42.995 START TEST nvmf_lvol 00:31:42.995 ************************************ 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:42.995 * Looking for test storage... 
00:31:42.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:31:42.995 20:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:42.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.996 --rc genhtml_branch_coverage=1 00:31:42.996 --rc genhtml_function_coverage=1 00:31:42.996 --rc genhtml_legend=1 00:31:42.996 --rc geninfo_all_blocks=1 00:31:42.996 --rc geninfo_unexecuted_blocks=1 00:31:42.996 00:31:42.996 ' 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:42.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.996 --rc genhtml_branch_coverage=1 00:31:42.996 --rc genhtml_function_coverage=1 00:31:42.996 --rc genhtml_legend=1 00:31:42.996 --rc geninfo_all_blocks=1 00:31:42.996 --rc geninfo_unexecuted_blocks=1 00:31:42.996 00:31:42.996 ' 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:42.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.996 --rc genhtml_branch_coverage=1 00:31:42.996 --rc genhtml_function_coverage=1 00:31:42.996 --rc genhtml_legend=1 00:31:42.996 --rc geninfo_all_blocks=1 00:31:42.996 --rc geninfo_unexecuted_blocks=1 00:31:42.996 00:31:42.996 ' 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:42.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.996 --rc genhtml_branch_coverage=1 00:31:42.996 --rc genhtml_function_coverage=1 00:31:42.996 --rc genhtml_legend=1 00:31:42.996 --rc geninfo_all_blocks=1 00:31:42.996 --rc geninfo_unexecuted_blocks=1 00:31:42.996 00:31:42.996 ' 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.996 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:42.997 
20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:42.997 20:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.273 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.533 20:51:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:48.533 20:51:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:48.533 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:48.533 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.533 20:51:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:48.533 Found net devices under 0000:af:00.0: cvl_0_0 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.533 20:51:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:48.533 Found net devices under 0000:af:00.1: cvl_0_1 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.533 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:48.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:31:48.793 00:31:48.793 --- 10.0.0.2 ping statistics --- 00:31:48.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.793 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:48.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:31:48.793 00:31:48.793 --- 10.0.0.1 ping statistics --- 00:31:48.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.793 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:48.793 20:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=566269 
00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 566269 00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 566269 ']' 00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.793 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.794 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:48.794 [2024-12-05 20:51:42.099050] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:48.794 [2024-12-05 20:51:42.099951] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:31:48.794 [2024-12-05 20:51:42.099987] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.794 [2024-12-05 20:51:42.178669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:48.794 [2024-12-05 20:51:42.216561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.794 [2024-12-05 20:51:42.216595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.794 [2024-12-05 20:51:42.216601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.794 [2024-12-05 20:51:42.216606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.794 [2024-12-05 20:51:42.216611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.794 [2024-12-05 20:51:42.217964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.794 [2024-12-05 20:51:42.218096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.794 [2024-12-05 20:51:42.218097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.053 [2024-12-05 20:51:42.285265] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:49.053 [2024-12-05 20:51:42.286004] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:49.053 [2024-12-05 20:51:42.286324] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:49.053 [2024-12-05 20:51:42.286391] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:49.620 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.620 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:49.620 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:49.620 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.620 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:49.620 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.620 20:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:49.880 [2024-12-05 20:51:43.110850] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.880 20:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:50.140 20:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:50.140 20:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:50.140 20:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:50.140 20:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:50.400 20:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:50.659 20:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ef5f9a6d-e923-4efd-a69a-bc32dce2b868 00:31:50.659 20:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ef5f9a6d-e923-4efd-a69a-bc32dce2b868 lvol 20 00:31:50.917 20:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2652caa3-49ac-40bb-bacf-0487abcd9b2e 00:31:50.917 20:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:50.917 20:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2652caa3-49ac-40bb-bacf-0487abcd9b2e 00:31:51.175 20:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:51.433 [2024-12-05 20:51:44.626756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:51.433 20:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:51.433 
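The RPC sequence driven by nvmf_lvol.sh up to this point can be sketched as one script. The rpc.py calls and arguments mirror the log; `SPDK_DIR` is an assumed shorthand for the workspace path, capturing the lvstore/lvol UUIDs from rpc.py stdout is an assumption about its output shape, and a running nvmf_tgt (here, inside the namespace) is a precondition:

```shell
#!/usr/bin/env bash
# Sketch of the lvol-over-NVMe/TCP setup traced above; not the actual
# test script, just the same RPCs in order.
set -euo pipefail
RPC="${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192

# Two malloc bdevs striped into a RAID-0, which carries the lvol store
$RPC bdev_malloc_create 64 512            # -> Malloc0
$RPC bdev_malloc_create 64 512            # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)   # 20 (MiB) volume, per the script's argument

# Export the lvol over NVMe/TCP on the namespace-side address
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```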
20:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=566723 00:31:51.433 20:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:51.433 20:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:52.809 20:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2652caa3-49ac-40bb-bacf-0487abcd9b2e MY_SNAPSHOT 00:31:52.809 20:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fd35a21b-9404-49df-be87-bdd3f35e4daf 00:31:52.809 20:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2652caa3-49ac-40bb-bacf-0487abcd9b2e 30 00:31:53.068 20:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone fd35a21b-9404-49df-be87-bdd3f35e4daf MY_CLONE 00:31:53.327 20:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=834cfa9e-b677-41ba-b93d-54f22bcb3afa 00:31:53.327 20:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 834cfa9e-b677-41ba-b93d-54f22bcb3afa 00:31:53.586 20:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 566723 00:32:01.705 Initializing NVMe Controllers 00:32:01.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:01.705 
Controller IO queue size 128, less than required. 00:32:01.705 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:01.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:01.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:01.705 Initialization complete. Launching workers. 00:32:01.705 ======================================================== 00:32:01.705 Latency(us) 00:32:01.705 Device Information : IOPS MiB/s Average min max 00:32:01.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13495.60 52.72 9484.49 2326.62 56595.23 00:32:01.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13254.40 51.77 9658.09 3160.62 60492.15 00:32:01.705 ======================================================== 00:32:01.705 Total : 26750.00 104.49 9570.50 2326.62 60492.15 00:32:01.705 00:32:01.705 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:01.964 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2652caa3-49ac-40bb-bacf-0487abcd9b2e 00:32:01.964 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef5f9a6d-e923-4efd-a69a-bc32dce2b868 00:32:02.222 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
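The Total row of the perf summary is consistent with an IOPS-weighted mean of the two per-core average latencies. Recomputing from the rounded table values:

```shell
# Recompute the summary "Total" row from the two per-core rows above.
# Inputs are the rounded values printed in the table, so the weighted
# average comes out at 9570.51 us rather than the reported 9570.50 us,
# which is presumably derived from unrounded per-I/O timings.
awk 'BEGIN {
    iops3 = 13495.60; lat3 = 9484.49;   # NSID 1 from core 3
    iops4 = 13254.40; lat4 = 9658.09;   # NSID 1 from core 4
    total = iops3 + iops4
    avg   = (iops3 * lat3 + iops4 * lat4) / total
    printf "%.2f IOPS, %.2f us average\n", total, avg
}'
```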
nvmftestfini 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:02.223 rmmod nvme_tcp 00:32:02.223 rmmod nvme_fabrics 00:32:02.223 rmmod nvme_keyring 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 566269 ']' 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 566269 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 566269 ']' 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 566269 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.223 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 566269 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 566269' 00:32:02.482 killing process with pid 566269 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 566269 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 566269 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.482 20:51:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.482 20:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.015 20:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.015 00:32:05.015 real 0m22.086s 00:32:05.015 user 0m54.807s 00:32:05.015 sys 0m9.633s 00:32:05.015 20:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.015 20:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:05.015 ************************************ 00:32:05.015 END TEST nvmf_lvol 00:32:05.015 ************************************ 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:05.015 ************************************ 00:32:05.015 START TEST nvmf_lvs_grow 00:32:05.015 ************************************ 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:05.015 * Looking for test storage... 
00:32:05.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.015 20:51:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.015 20:51:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.015 --rc genhtml_branch_coverage=1 00:32:05.015 --rc genhtml_function_coverage=1 00:32:05.015 --rc genhtml_legend=1 00:32:05.015 --rc geninfo_all_blocks=1 00:32:05.015 --rc geninfo_unexecuted_blocks=1 00:32:05.015 00:32:05.015 ' 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.015 --rc genhtml_branch_coverage=1 00:32:05.015 --rc genhtml_function_coverage=1 00:32:05.015 --rc genhtml_legend=1 00:32:05.015 --rc geninfo_all_blocks=1 00:32:05.015 --rc geninfo_unexecuted_blocks=1 00:32:05.015 00:32:05.015 ' 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.015 --rc genhtml_branch_coverage=1 00:32:05.015 --rc genhtml_function_coverage=1 00:32:05.015 --rc genhtml_legend=1 00:32:05.015 --rc geninfo_all_blocks=1 00:32:05.015 --rc geninfo_unexecuted_blocks=1 00:32:05.015 00:32:05.015 ' 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.015 --rc genhtml_branch_coverage=1 00:32:05.015 --rc genhtml_function_coverage=1 00:32:05.015 --rc genhtml_legend=1 00:32:05.015 --rc geninfo_all_blocks=1 00:32:05.015 --rc 
geninfo_unexecuted_blocks=1 00:32:05.015 00:32:05.015 ' 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.015 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:05.016 20:51:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.016 20:51:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.016 20:51:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.016 20:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:11.586 
20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:11.586 20:52:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:11.586 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:11.587 20:52:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:11.587 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:11.587 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:11.587 Found net devices under 0000:af:00.0: cvl_0_0 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.587 20:52:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:11.587 Found net devices under 0000:af:00.1: cvl_0_1 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:11.587 
20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:11.587 20:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:11.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:11.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:32:11.587 00:32:11.587 --- 10.0.0.2 ping statistics --- 00:32:11.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.587 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:11.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:11.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:32:11.587 00:32:11.587 --- 10.0.0.1 ping statistics --- 00:32:11.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.587 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:11.587 20:52:04 
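The nvmftestinit sequence traced above splits the two ports of one NIC into a target/initiator pair: one interface is moved into a network namespace and given 10.0.0.2/24, the other stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and both sides are cross-pinged. The steps can be condensed into a sketch; since the real commands need root and the rig's cvl_0_* interfaces, this dry-run only echoes each command instead of executing it:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator split performed by nvmftestinit.
# Interface names, namespace name, and IPs mirror the trace above;
# run() echoes each command rather than executing it (root required).
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace, gets the target IP
INITIATOR_IF=cvl_0_1     # stays in the root namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> initiator
```

Because the target application is later launched under `ip netns exec cvl_0_0_ns_spdk`, the initiator-side tools in the root namespace exercise a real TCP path through the physical ports rather than loopback.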
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=572142 00:32:11.587 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 572142 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 572142 ']' 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:11.588 [2024-12-05 20:52:04.270746] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:11.588 [2024-12-05 20:52:04.271622] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:32:11.588 [2024-12-05 20:52:04.271653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.588 [2024-12-05 20:52:04.345064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.588 [2024-12-05 20:52:04.383587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:11.588 [2024-12-05 20:52:04.383619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:11.588 [2024-12-05 20:52:04.383626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:11.588 [2024-12-05 20:52:04.383632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:11.588 [2024-12-05 20:52:04.383637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:11.588 [2024-12-05 20:52:04.384158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.588 [2024-12-05 20:52:04.451401] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:11.588 [2024-12-05 20:52:04.451587] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:11.588 [2024-12-05 20:52:04.680793] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:11.588 ************************************ 00:32:11.588 START TEST lvs_grow_clean 00:32:11.588 ************************************ 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:11.588 20:52:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:11.588 20:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:11.846 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1c4521a2-174b-4c85-a96c-5207ea1526de 00:32:11.846 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4521a2-174b-4c85-a96c-5207ea1526de 00:32:11.846 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:12.105 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:12.105 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:12.105 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1c4521a2-174b-4c85-a96c-5207ea1526de lvol 150 00:32:12.105 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f58a4285-ee2f-45ac-b21d-1e6aba8cd0a2 00:32:12.105 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:12.105 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:12.363 [2024-12-05 20:52:05.668536] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:12.363 [2024-12-05 20:52:05.668663] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:12.363 true 00:32:12.363 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4521a2-174b-4c85-a96c-5207ea1526de 00:32:12.363 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:12.622 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:12.622 20:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:12.622 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f58a4285-ee2f-45ac-b21d-1e6aba8cd0a2 00:32:12.881 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:13.140 [2024-12-05 20:52:06.365001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.140 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
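The RPC calls traced through this phase build the device stack that bdevperf will attach to: an AIO bdev over a 200M file, an lvstore on top of it, a 150M lvol, and an NVMe-oF subsystem exporting that lvol on TCP port 4420. A condensed, echo-only sketch of the same `rpc.py` sequence follows; the file path is a placeholder and the two UUID variables stand in for the values the create calls return on the rig:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the rpc.py sequence lvs_grow uses to export a
# logical volume over NVMe/TCP. Placeholders (marked below) replace
# rig-specific paths and the UUIDs returned by the create calls.
rpc() { echo "+ rpc.py $*"; }

AIO_FILE=/tmp/aio_bdev           # placeholder for the 200M backing file
LVS_UUID=LVS_UUID_FROM_CREATE    # placeholder: printed by create_lvstore
LVOL_UUID=LVOL_UUID_FROM_CREATE  # placeholder: printed by bdev_lvol_create
NQN=nqn.2016-06.io.spdk:cnode0

rpc bdev_aio_create "$AIO_FILE" aio_bdev 4096           # AIO bdev, 4K blocks
rpc bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs
rpc bdev_lvol_create -u "$LVS_UUID" lvol 150            # 150M lvol
rpc nvmf_create_subsystem "$NQN" -a -s SPDK0
rpc nvmf_subsystem_add_ns "$NQN" "$LVOL_UUID"
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

In the actual run each `rpc.py` invocation goes to the target inside the namespace via `/var/tmp/spdk.sock`, while the later `bdev_nvme_attach_controller` call goes to bdevperf's separate socket, `/var/tmp/bdevperf.sock`.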
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:13.140 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=572703 00:32:13.140 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:13.140 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:13.399 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 572703 /var/tmp/bdevperf.sock 00:32:13.399 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 572703 ']' 00:32:13.399 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:13.399 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:13.399 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:13.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:13.399 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:13.399 20:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:13.399 [2024-12-05 20:52:06.623193] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:32:13.399 [2024-12-05 20:52:06.623241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid572703 ] 00:32:13.399 [2024-12-05 20:52:06.695391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.399 [2024-12-05 20:52:06.734622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:14.348 20:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:14.348 20:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:14.348 20:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:14.607 Nvme0n1 00:32:14.607 20:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:14.607 [ 00:32:14.607 { 00:32:14.607 "name": "Nvme0n1", 00:32:14.607 "aliases": [ 00:32:14.607 "f58a4285-ee2f-45ac-b21d-1e6aba8cd0a2" 00:32:14.607 ], 00:32:14.607 "product_name": "NVMe disk", 00:32:14.607 
"block_size": 4096, 00:32:14.607 "num_blocks": 38912, 00:32:14.607 "uuid": "f58a4285-ee2f-45ac-b21d-1e6aba8cd0a2", 00:32:14.607 "numa_id": 1, 00:32:14.607 "assigned_rate_limits": { 00:32:14.607 "rw_ios_per_sec": 0, 00:32:14.607 "rw_mbytes_per_sec": 0, 00:32:14.607 "r_mbytes_per_sec": 0, 00:32:14.607 "w_mbytes_per_sec": 0 00:32:14.607 }, 00:32:14.607 "claimed": false, 00:32:14.607 "zoned": false, 00:32:14.607 "supported_io_types": { 00:32:14.607 "read": true, 00:32:14.607 "write": true, 00:32:14.607 "unmap": true, 00:32:14.607 "flush": true, 00:32:14.607 "reset": true, 00:32:14.607 "nvme_admin": true, 00:32:14.607 "nvme_io": true, 00:32:14.607 "nvme_io_md": false, 00:32:14.607 "write_zeroes": true, 00:32:14.607 "zcopy": false, 00:32:14.607 "get_zone_info": false, 00:32:14.607 "zone_management": false, 00:32:14.607 "zone_append": false, 00:32:14.607 "compare": true, 00:32:14.607 "compare_and_write": true, 00:32:14.607 "abort": true, 00:32:14.607 "seek_hole": false, 00:32:14.607 "seek_data": false, 00:32:14.607 "copy": true, 00:32:14.607 "nvme_iov_md": false 00:32:14.607 }, 00:32:14.607 "memory_domains": [ 00:32:14.607 { 00:32:14.607 "dma_device_id": "system", 00:32:14.607 "dma_device_type": 1 00:32:14.607 } 00:32:14.607 ], 00:32:14.607 "driver_specific": { 00:32:14.607 "nvme": [ 00:32:14.607 { 00:32:14.607 "trid": { 00:32:14.607 "trtype": "TCP", 00:32:14.607 "adrfam": "IPv4", 00:32:14.607 "traddr": "10.0.0.2", 00:32:14.607 "trsvcid": "4420", 00:32:14.607 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:14.607 }, 00:32:14.607 "ctrlr_data": { 00:32:14.607 "cntlid": 1, 00:32:14.607 "vendor_id": "0x8086", 00:32:14.607 "model_number": "SPDK bdev Controller", 00:32:14.607 "serial_number": "SPDK0", 00:32:14.607 "firmware_revision": "25.01", 00:32:14.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.607 "oacs": { 00:32:14.607 "security": 0, 00:32:14.607 "format": 0, 00:32:14.607 "firmware": 0, 00:32:14.607 "ns_manage": 0 00:32:14.607 }, 00:32:14.607 "multi_ctrlr": true, 
00:32:14.607 "ana_reporting": false 00:32:14.607 }, 00:32:14.607 "vs": { 00:32:14.607 "nvme_version": "1.3" 00:32:14.607 }, 00:32:14.607 "ns_data": { 00:32:14.607 "id": 1, 00:32:14.607 "can_share": true 00:32:14.607 } 00:32:14.607 } 00:32:14.607 ], 00:32:14.607 "mp_policy": "active_passive" 00:32:14.607 } 00:32:14.607 } 00:32:14.607 ] 00:32:14.607 20:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=572967 00:32:14.607 20:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:14.607 20:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:14.866 Running I/O for 10 seconds... 00:32:15.802 Latency(us) 00:32:15.802 [2024-12-05T19:52:09.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:15.802 Nvme0n1 : 1.00 24765.00 96.74 0.00 0.00 0.00 0.00 0.00 00:32:15.802 [2024-12-05T19:52:09.243Z] =================================================================================================================== 00:32:15.802 [2024-12-05T19:52:09.243Z] Total : 24765.00 96.74 0.00 0.00 0.00 0.00 0.00 00:32:15.802 00:32:16.739 20:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1c4521a2-174b-4c85-a96c-5207ea1526de 00:32:16.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:16.739 Nvme0n1 : 2.00 25082.50 97.98 0.00 0.00 0.00 0.00 0.00 00:32:16.739 [2024-12-05T19:52:10.180Z] 
=================================================================================================================== 00:32:16.739 [2024-12-05T19:52:10.180Z] Total : 25082.50 97.98 0.00 0.00 0.00 0.00 0.00 00:32:16.739 00:32:16.998 true 00:32:16.998 20:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4521a2-174b-4c85-a96c-5207ea1526de 00:32:16.998 20:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:16.998 20:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:16.998 20:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:16.998 20:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 572967 00:32:17.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:17.935 Nvme0n1 : 3.00 25103.67 98.06 0.00 0.00 0.00 0.00 0.00 00:32:17.935 [2024-12-05T19:52:11.376Z] =================================================================================================================== 00:32:17.935 [2024-12-05T19:52:11.376Z] Total : 25103.67 98.06 0.00 0.00 0.00 0.00 0.00 00:32:17.935 00:32:18.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:18.873 Nvme0n1 : 4.00 25241.25 98.60 0.00 0.00 0.00 0.00 0.00 00:32:18.873 [2024-12-05T19:52:12.314Z] =================================================================================================================== 00:32:18.873 [2024-12-05T19:52:12.314Z] Total : 25241.25 98.60 0.00 0.00 0.00 0.00 0.00 00:32:18.873 00:32:19.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:32:19.808 Nvme0n1 : 5.00 25298.40 98.82 0.00 0.00 0.00 0.00 0.00 00:32:19.808 [2024-12-05T19:52:13.249Z] =================================================================================================================== 00:32:19.808 [2024-12-05T19:52:13.250Z] Total : 25298.40 98.82 0.00 0.00 0.00 0.00 0.00 00:32:19.809 00:32:20.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:20.746 Nvme0n1 : 6.00 25357.67 99.05 0.00 0.00 0.00 0.00 0.00 00:32:20.746 [2024-12-05T19:52:14.187Z] =================================================================================================================== 00:32:20.746 [2024-12-05T19:52:14.187Z] Total : 25357.67 99.05 0.00 0.00 0.00 0.00 0.00 00:32:20.746 00:32:21.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.683 Nvme0n1 : 7.00 25400.00 99.22 0.00 0.00 0.00 0.00 0.00 00:32:21.683 [2024-12-05T19:52:15.124Z] =================================================================================================================== 00:32:21.683 [2024-12-05T19:52:15.124Z] Total : 25400.00 99.22 0.00 0.00 0.00 0.00 0.00 00:32:21.683 00:32:23.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.057 Nvme0n1 : 8.00 25447.62 99.40 0.00 0.00 0.00 0.00 0.00 00:32:23.057 [2024-12-05T19:52:16.498Z] =================================================================================================================== 00:32:23.057 [2024-12-05T19:52:16.499Z] Total : 25447.62 99.40 0.00 0.00 0.00 0.00 0.00 00:32:23.058 00:32:23.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.994 Nvme0n1 : 9.00 25470.56 99.49 0.00 0.00 0.00 0.00 0.00 00:32:23.994 [2024-12-05T19:52:17.435Z] =================================================================================================================== 00:32:23.994 [2024-12-05T19:52:17.435Z] Total : 25470.56 99.49 0.00 0.00 0.00 0.00 0.00 00:32:23.994 
00:32:24.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.931 Nvme0n1 : 10.00 25501.60 99.62 0.00 0.00 0.00 0.00 0.00 00:32:24.931 [2024-12-05T19:52:18.372Z] =================================================================================================================== 00:32:24.931 [2024-12-05T19:52:18.372Z] Total : 25501.60 99.62 0.00 0.00 0.00 0.00 0.00 00:32:24.931 00:32:24.931 00:32:24.931 Latency(us) 00:32:24.931 [2024-12-05T19:52:18.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.931 Nvme0n1 : 10.00 25504.85 99.63 0.00 0.00 5015.96 4527.94 25022.84 00:32:24.931 [2024-12-05T19:52:18.372Z] =================================================================================================================== 00:32:24.931 [2024-12-05T19:52:18.372Z] Total : 25504.85 99.63 0.00 0.00 5015.96 4527.94 25022.84 00:32:24.931 { 00:32:24.931 "results": [ 00:32:24.931 { 00:32:24.931 "job": "Nvme0n1", 00:32:24.931 "core_mask": "0x2", 00:32:24.931 "workload": "randwrite", 00:32:24.931 "status": "finished", 00:32:24.931 "queue_depth": 128, 00:32:24.931 "io_size": 4096, 00:32:24.931 "runtime": 10.003744, 00:32:24.931 "iops": 25504.850983791668, 00:32:24.931 "mibps": 99.6283241554362, 00:32:24.931 "io_failed": 0, 00:32:24.931 "io_timeout": 0, 00:32:24.931 "avg_latency_us": 5015.960948512497, 00:32:24.931 "min_latency_us": 4527.941818181818, 00:32:24.931 "max_latency_us": 25022.836363636365 00:32:24.931 } 00:32:24.931 ], 00:32:24.931 "core_count": 1 00:32:24.931 } 00:32:24.931 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 572703 00:32:24.931 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 572703 ']' 00:32:24.931 20:52:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 572703 00:32:24.931 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:24.931 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.931 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 572703 00:32:24.931 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:24.931 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:24.931 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 572703' 00:32:24.931 killing process with pid 572703 00:32:24.931 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 572703 00:32:24.931 Received shutdown signal, test time was about 10.000000 seconds 00:32:24.931 00:32:24.931 Latency(us) 00:32:24.931 [2024-12-05T19:52:18.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.931 [2024-12-05T19:52:18.372Z] =================================================================================================================== 00:32:24.931 [2024-12-05T19:52:18.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:24.932 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 572703 00:32:24.932 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:25.191 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:25.449 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4521a2-174b-4c85-a96c-5207ea1526de 00:32:25.449 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:25.708 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:25.708 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:25.708 20:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:25.708 [2024-12-05 20:52:19.064592] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4521a2-174b-4c85-a96c-5207ea1526de 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4521a2-174b-4c85-a96c-5207ea1526de 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:25.708 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4521a2-174b-4c85-a96c-5207ea1526de 00:32:25.966 request: 00:32:25.966 { 00:32:25.966 "uuid": "1c4521a2-174b-4c85-a96c-5207ea1526de", 00:32:25.966 "method": 
"bdev_lvol_get_lvstores", 00:32:25.966 "req_id": 1 00:32:25.966 } 00:32:25.966 Got JSON-RPC error response 00:32:25.966 response: 00:32:25.966 { 00:32:25.966 "code": -19, 00:32:25.966 "message": "No such device" 00:32:25.966 } 00:32:25.966 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:25.966 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:25.966 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:25.966 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:25.966 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:26.225 aio_bdev 00:32:26.225 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f58a4285-ee2f-45ac-b21d-1e6aba8cd0a2 00:32:26.225 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f58a4285-ee2f-45ac-b21d-1e6aba8cd0a2 00:32:26.225 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:26.225 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:26.225 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:26.225 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:26.225 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:26.225 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f58a4285-ee2f-45ac-b21d-1e6aba8cd0a2 -t 2000 00:32:26.483 [ 00:32:26.483 { 00:32:26.483 "name": "f58a4285-ee2f-45ac-b21d-1e6aba8cd0a2", 00:32:26.483 "aliases": [ 00:32:26.483 "lvs/lvol" 00:32:26.483 ], 00:32:26.483 "product_name": "Logical Volume", 00:32:26.483 "block_size": 4096, 00:32:26.483 "num_blocks": 38912, 00:32:26.483 "uuid": "f58a4285-ee2f-45ac-b21d-1e6aba8cd0a2", 00:32:26.483 "assigned_rate_limits": { 00:32:26.483 "rw_ios_per_sec": 0, 00:32:26.483 "rw_mbytes_per_sec": 0, 00:32:26.483 "r_mbytes_per_sec": 0, 00:32:26.483 "w_mbytes_per_sec": 0 00:32:26.483 }, 00:32:26.483 "claimed": false, 00:32:26.483 "zoned": false, 00:32:26.483 "supported_io_types": { 00:32:26.483 "read": true, 00:32:26.483 "write": true, 00:32:26.483 "unmap": true, 00:32:26.483 "flush": false, 00:32:26.483 "reset": true, 00:32:26.483 "nvme_admin": false, 00:32:26.483 "nvme_io": false, 00:32:26.483 "nvme_io_md": false, 00:32:26.483 "write_zeroes": true, 00:32:26.483 "zcopy": false, 00:32:26.483 "get_zone_info": false, 00:32:26.483 "zone_management": false, 00:32:26.483 "zone_append": false, 00:32:26.483 "compare": false, 00:32:26.483 "compare_and_write": false, 00:32:26.483 "abort": false, 00:32:26.483 "seek_hole": true, 00:32:26.483 "seek_data": true, 00:32:26.483 "copy": false, 00:32:26.483 "nvme_iov_md": false 00:32:26.483 }, 00:32:26.483 "driver_specific": { 00:32:26.483 "lvol": { 00:32:26.483 "lvol_store_uuid": "1c4521a2-174b-4c85-a96c-5207ea1526de", 00:32:26.483 "base_bdev": "aio_bdev", 00:32:26.484 
"thin_provision": false, 00:32:26.484 "num_allocated_clusters": 38, 00:32:26.484 "snapshot": false, 00:32:26.484 "clone": false, 00:32:26.484 "esnap_clone": false 00:32:26.484 } 00:32:26.484 } 00:32:26.484 } 00:32:26.484 ] 00:32:26.484 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:26.484 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4521a2-174b-4c85-a96c-5207ea1526de 00:32:26.484 20:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:26.742 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:26.742 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4521a2-174b-4c85-a96c-5207ea1526de 00:32:26.742 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:27.000 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:27.000 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f58a4285-ee2f-45ac-b21d-1e6aba8cd0a2 00:32:27.000 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1c4521a2-174b-4c85-a96c-5207ea1526de 
00:32:27.259 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:27.519 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:27.519 00:32:27.519 real 0m16.049s 00:32:27.519 user 0m15.720s 00:32:27.519 sys 0m1.471s 00:32:27.519 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.519 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:27.519 ************************************ 00:32:27.519 END TEST lvs_grow_clean 00:32:27.519 ************************************ 00:32:27.519 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:27.519 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:27.519 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.519 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:27.519 ************************************ 00:32:27.519 START TEST lvs_grow_dirty 00:32:27.519 ************************************ 00:32:27.519 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:27.519 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:27.519 20:52:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:27.519 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:27.520 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:27.520 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:27.520 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:27.520 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:27.520 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:27.520 20:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:27.780 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:27.780 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:28.039 20:52:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d77797c5-3553-4c13-811a-83acd3c14b99 00:32:28.039 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:28.039 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:28.039 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:28.039 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:28.039 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d77797c5-3553-4c13-811a-83acd3c14b99 lvol 150 00:32:28.298 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5f944e37-e180-41a4-b74f-542aceabc5b2 00:32:28.298 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:28.298 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:28.558 [2024-12-05 20:52:21.788538] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:28.558 [2024-12-05 
20:52:21.788667] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:28.558 true 00:32:28.558 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:28.558 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:28.558 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:28.558 20:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:28.817 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f944e37-e180-41a4-b74f-542aceabc5b2 00:32:29.078 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:29.078 [2024-12-05 20:52:22.476954] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.078 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:29.337 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=575410 00:32:29.337 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:29.337 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:29.337 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 575410 /var/tmp/bdevperf.sock 00:32:29.337 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 575410 ']' 00:32:29.337 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:29.337 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.337 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:29.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:29.337 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.337 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:29.337 [2024-12-05 20:52:22.719771] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:32:29.337 [2024-12-05 20:52:22.719819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid575410 ] 00:32:29.597 [2024-12-05 20:52:22.792466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.597 [2024-12-05 20:52:22.831399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.597 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.597 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:29.597 20:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:30.165 Nvme0n1 00:32:30.165 20:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:30.165 [ 00:32:30.165 { 00:32:30.165 "name": "Nvme0n1", 00:32:30.165 "aliases": [ 00:32:30.165 "5f944e37-e180-41a4-b74f-542aceabc5b2" 00:32:30.165 ], 00:32:30.165 "product_name": "NVMe disk", 00:32:30.165 "block_size": 4096, 00:32:30.165 "num_blocks": 38912, 00:32:30.165 "uuid": "5f944e37-e180-41a4-b74f-542aceabc5b2", 00:32:30.165 "numa_id": 1, 00:32:30.165 "assigned_rate_limits": { 00:32:30.165 "rw_ios_per_sec": 0, 00:32:30.165 "rw_mbytes_per_sec": 0, 00:32:30.165 "r_mbytes_per_sec": 0, 00:32:30.165 "w_mbytes_per_sec": 0 00:32:30.165 }, 00:32:30.165 "claimed": false, 00:32:30.165 "zoned": false, 
00:32:30.165 "supported_io_types": { 00:32:30.165 "read": true, 00:32:30.165 "write": true, 00:32:30.165 "unmap": true, 00:32:30.165 "flush": true, 00:32:30.165 "reset": true, 00:32:30.165 "nvme_admin": true, 00:32:30.165 "nvme_io": true, 00:32:30.165 "nvme_io_md": false, 00:32:30.165 "write_zeroes": true, 00:32:30.165 "zcopy": false, 00:32:30.165 "get_zone_info": false, 00:32:30.165 "zone_management": false, 00:32:30.165 "zone_append": false, 00:32:30.165 "compare": true, 00:32:30.165 "compare_and_write": true, 00:32:30.165 "abort": true, 00:32:30.165 "seek_hole": false, 00:32:30.165 "seek_data": false, 00:32:30.165 "copy": true, 00:32:30.165 "nvme_iov_md": false 00:32:30.165 }, 00:32:30.165 "memory_domains": [ 00:32:30.165 { 00:32:30.165 "dma_device_id": "system", 00:32:30.165 "dma_device_type": 1 00:32:30.165 } 00:32:30.165 ], 00:32:30.165 "driver_specific": { 00:32:30.165 "nvme": [ 00:32:30.165 { 00:32:30.165 "trid": { 00:32:30.165 "trtype": "TCP", 00:32:30.165 "adrfam": "IPv4", 00:32:30.165 "traddr": "10.0.0.2", 00:32:30.165 "trsvcid": "4420", 00:32:30.166 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:30.166 }, 00:32:30.166 "ctrlr_data": { 00:32:30.166 "cntlid": 1, 00:32:30.166 "vendor_id": "0x8086", 00:32:30.166 "model_number": "SPDK bdev Controller", 00:32:30.166 "serial_number": "SPDK0", 00:32:30.166 "firmware_revision": "25.01", 00:32:30.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:30.166 "oacs": { 00:32:30.166 "security": 0, 00:32:30.166 "format": 0, 00:32:30.166 "firmware": 0, 00:32:30.166 "ns_manage": 0 00:32:30.166 }, 00:32:30.166 "multi_ctrlr": true, 00:32:30.166 "ana_reporting": false 00:32:30.166 }, 00:32:30.166 "vs": { 00:32:30.166 "nvme_version": "1.3" 00:32:30.166 }, 00:32:30.166 "ns_data": { 00:32:30.166 "id": 1, 00:32:30.166 "can_share": true 00:32:30.166 } 00:32:30.166 } 00:32:30.166 ], 00:32:30.166 "mp_policy": "active_passive" 00:32:30.166 } 00:32:30.166 } 00:32:30.166 ] 00:32:30.166 20:52:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:30.166 20:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=575647 00:32:30.166 20:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:30.166 Running I/O for 10 seconds... 00:32:31.103 Latency(us) 00:32:31.103 [2024-12-05T19:52:24.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:31.103 Nvme0n1 : 1.00 24765.00 96.74 0.00 0.00 0.00 0.00 0.00 00:32:31.103 [2024-12-05T19:52:24.544Z] =================================================================================================================== 00:32:31.103 [2024-12-05T19:52:24.544Z] Total : 24765.00 96.74 0.00 0.00 0.00 0.00 0.00 00:32:31.103 00:32:32.041 20:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:32.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:32.300 Nvme0n1 : 2.00 25209.50 98.47 0.00 0.00 0.00 0.00 0.00 00:32:32.300 [2024-12-05T19:52:25.741Z] =================================================================================================================== 00:32:32.300 [2024-12-05T19:52:25.741Z] Total : 25209.50 98.47 0.00 0.00 0.00 0.00 0.00 00:32:32.300 00:32:32.300 true 00:32:32.300 20:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:32.300 20:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:32.559 20:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:32.559 20:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:32.559 20:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 575647 00:32:33.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:33.127 Nvme0n1 : 3.00 25315.33 98.89 0.00 0.00 0.00 0.00 0.00 00:32:33.127 [2024-12-05T19:52:26.568Z] =================================================================================================================== 00:32:33.127 [2024-12-05T19:52:26.568Z] Total : 25315.33 98.89 0.00 0.00 0.00 0.00 0.00 00:32:33.127 00:32:34.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:34.505 Nvme0n1 : 4.00 25400.00 99.22 0.00 0.00 0.00 0.00 0.00 00:32:34.505 [2024-12-05T19:52:27.946Z] =================================================================================================================== 00:32:34.505 [2024-12-05T19:52:27.947Z] Total : 25400.00 99.22 0.00 0.00 0.00 0.00 0.00 00:32:34.506 00:32:35.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:35.442 Nvme0n1 : 5.00 25476.20 99.52 0.00 0.00 0.00 0.00 0.00 00:32:35.442 [2024-12-05T19:52:28.883Z] =================================================================================================================== 00:32:35.442 [2024-12-05T19:52:28.883Z] Total : 25476.20 99.52 0.00 0.00 0.00 0.00 0.00 00:32:35.442 00:32:36.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:36.378 Nvme0n1 : 6.00 25516.50 99.67 0.00 0.00 0.00 0.00 0.00 00:32:36.378 [2024-12-05T19:52:29.819Z] =================================================================================================================== 00:32:36.378 [2024-12-05T19:52:29.819Z] Total : 25516.50 99.67 0.00 0.00 0.00 0.00 0.00 00:32:36.378 00:32:37.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.314 Nvme0n1 : 7.00 25490.86 99.57 0.00 0.00 0.00 0.00 0.00 00:32:37.314 [2024-12-05T19:52:30.755Z] =================================================================================================================== 00:32:37.314 [2024-12-05T19:52:30.755Z] Total : 25490.86 99.57 0.00 0.00 0.00 0.00 0.00 00:32:37.314 00:32:38.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.250 Nvme0n1 : 8.00 25529.50 99.72 0.00 0.00 0.00 0.00 0.00 00:32:38.250 [2024-12-05T19:52:31.691Z] =================================================================================================================== 00:32:38.250 [2024-12-05T19:52:31.691Z] Total : 25529.50 99.72 0.00 0.00 0.00 0.00 0.00 00:32:38.250 00:32:39.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.187 Nvme0n1 : 9.00 25557.44 99.83 0.00 0.00 0.00 0.00 0.00 00:32:39.187 [2024-12-05T19:52:32.628Z] =================================================================================================================== 00:32:39.187 [2024-12-05T19:52:32.628Z] Total : 25557.44 99.83 0.00 0.00 0.00 0.00 0.00 00:32:39.187 00:32:40.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.140 Nvme0n1 : 10.00 25579.80 99.92 0.00 0.00 0.00 0.00 0.00 00:32:40.140 [2024-12-05T19:52:33.581Z] =================================================================================================================== 00:32:40.140 [2024-12-05T19:52:33.581Z] Total : 25579.80 99.92 0.00 0.00 0.00 0.00 0.00 00:32:40.140 00:32:40.140 
00:32:40.140 Latency(us) 00:32:40.140 [2024-12-05T19:52:33.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.140 Nvme0n1 : 10.00 25582.57 99.93 0.00 0.00 5000.88 2874.65 26929.34 00:32:40.140 [2024-12-05T19:52:33.581Z] =================================================================================================================== 00:32:40.140 [2024-12-05T19:52:33.581Z] Total : 25582.57 99.93 0.00 0.00 5000.88 2874.65 26929.34 00:32:40.140 { 00:32:40.140 "results": [ 00:32:40.140 { 00:32:40.140 "job": "Nvme0n1", 00:32:40.140 "core_mask": "0x2", 00:32:40.140 "workload": "randwrite", 00:32:40.140 "status": "finished", 00:32:40.140 "queue_depth": 128, 00:32:40.140 "io_size": 4096, 00:32:40.140 "runtime": 10.00392, 00:32:40.140 "iops": 25582.571631920287, 00:32:40.140 "mibps": 99.93192043718862, 00:32:40.140 "io_failed": 0, 00:32:40.140 "io_timeout": 0, 00:32:40.140 "avg_latency_us": 5000.876140304761, 00:32:40.140 "min_latency_us": 2874.6472727272726, 00:32:40.140 "max_latency_us": 26929.33818181818 00:32:40.140 } 00:32:40.140 ], 00:32:40.140 "core_count": 1 00:32:40.140 } 00:32:40.399 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 575410 00:32:40.399 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 575410 ']' 00:32:40.399 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 575410 00:32:40.399 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:40.399 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.399 20:52:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 575410 00:32:40.399 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:40.399 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:40.399 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 575410' 00:32:40.399 killing process with pid 575410 00:32:40.399 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 575410 00:32:40.399 Received shutdown signal, test time was about 10.000000 seconds 00:32:40.399 00:32:40.399 Latency(us) 00:32:40.399 [2024-12-05T19:52:33.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.399 [2024-12-05T19:52:33.840Z] =================================================================================================================== 00:32:40.399 [2024-12-05T19:52:33.840Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:40.399 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 575410 00:32:40.399 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:40.658 20:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:40.918 20:52:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:40.918 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:40.918 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:40.918 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:40.918 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 572142 00:32:40.918 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 572142 00:32:41.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 572142 Killed "${NVMF_APP[@]}" "$@" 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=577496 00:32:41.177 20:52:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 577496 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 577496 ']' 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:41.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:41.177 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:41.177 [2024-12-05 20:52:34.424297] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:41.177 [2024-12-05 20:52:34.425178] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:32:41.177 [2024-12-05 20:52:34.425213] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:41.177 [2024-12-05 20:52:34.503112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.177 [2024-12-05 20:52:34.540804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:41.177 [2024-12-05 20:52:34.540839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:41.177 [2024-12-05 20:52:34.540845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:41.177 [2024-12-05 20:52:34.540851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:41.177 [2024-12-05 20:52:34.540856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:41.177 [2024-12-05 20:52:34.541409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.177 [2024-12-05 20:52:34.608124] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:41.177 [2024-12-05 20:52:34.608315] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:41.437 [2024-12-05 20:52:34.834789] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:41.437 [2024-12-05 20:52:34.834988] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:41.437 [2024-12-05 20:52:34.835086] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5f944e37-e180-41a4-b74f-542aceabc5b2 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=5f944e37-e180-41a4-b74f-542aceabc5b2 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:41.437 20:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:41.696 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5f944e37-e180-41a4-b74f-542aceabc5b2 -t 2000 00:32:41.966 [ 00:32:41.966 { 00:32:41.966 "name": "5f944e37-e180-41a4-b74f-542aceabc5b2", 00:32:41.966 "aliases": [ 00:32:41.966 "lvs/lvol" 00:32:41.966 ], 00:32:41.966 "product_name": "Logical Volume", 00:32:41.966 "block_size": 4096, 00:32:41.966 "num_blocks": 38912, 00:32:41.966 "uuid": "5f944e37-e180-41a4-b74f-542aceabc5b2", 00:32:41.966 "assigned_rate_limits": { 00:32:41.966 "rw_ios_per_sec": 0, 00:32:41.966 "rw_mbytes_per_sec": 0, 00:32:41.966 "r_mbytes_per_sec": 0, 00:32:41.966 "w_mbytes_per_sec": 0 00:32:41.966 }, 00:32:41.966 "claimed": false, 00:32:41.966 "zoned": false, 00:32:41.966 "supported_io_types": { 00:32:41.966 "read": true, 00:32:41.966 "write": true, 00:32:41.966 "unmap": true, 00:32:41.966 "flush": false, 00:32:41.966 "reset": true, 00:32:41.966 "nvme_admin": false, 00:32:41.966 "nvme_io": false, 00:32:41.966 "nvme_io_md": false, 00:32:41.966 "write_zeroes": true, 
00:32:41.966 "zcopy": false, 00:32:41.966 "get_zone_info": false, 00:32:41.966 "zone_management": false, 00:32:41.966 "zone_append": false, 00:32:41.967 "compare": false, 00:32:41.967 "compare_and_write": false, 00:32:41.967 "abort": false, 00:32:41.967 "seek_hole": true, 00:32:41.967 "seek_data": true, 00:32:41.967 "copy": false, 00:32:41.967 "nvme_iov_md": false 00:32:41.967 }, 00:32:41.967 "driver_specific": { 00:32:41.967 "lvol": { 00:32:41.967 "lvol_store_uuid": "d77797c5-3553-4c13-811a-83acd3c14b99", 00:32:41.967 "base_bdev": "aio_bdev", 00:32:41.967 "thin_provision": false, 00:32:41.967 "num_allocated_clusters": 38, 00:32:41.967 "snapshot": false, 00:32:41.967 "clone": false, 00:32:41.967 "esnap_clone": false 00:32:41.967 } 00:32:41.967 } 00:32:41.967 } 00:32:41.967 ] 00:32:41.967 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:41.967 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:41.967 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:41.967 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:41.967 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:41.967 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:42.228 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:42.228 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:42.486 [2024-12-05 20:52:35.737853] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:42.486 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:42.744 request: 00:32:42.744 { 00:32:42.744 "uuid": "d77797c5-3553-4c13-811a-83acd3c14b99", 00:32:42.744 "method": "bdev_lvol_get_lvstores", 00:32:42.744 "req_id": 1 00:32:42.744 } 00:32:42.744 Got JSON-RPC error response 00:32:42.744 response: 00:32:42.744 { 00:32:42.744 "code": -19, 00:32:42.744 "message": "No such device" 00:32:42.744 } 00:32:42.744 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:42.744 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:42.744 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:42.745 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:42.745 20:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:42.745 aio_bdev 00:32:42.745 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5f944e37-e180-41a4-b74f-542aceabc5b2 00:32:42.745 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5f944e37-e180-41a4-b74f-542aceabc5b2 00:32:42.745 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:42.745 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:42.745 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:42.745 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:42.745 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:43.003 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5f944e37-e180-41a4-b74f-542aceabc5b2 -t 2000 00:32:43.263 [ 00:32:43.263 { 00:32:43.263 "name": "5f944e37-e180-41a4-b74f-542aceabc5b2", 00:32:43.263 "aliases": [ 00:32:43.263 "lvs/lvol" 00:32:43.263 ], 00:32:43.263 "product_name": "Logical Volume", 00:32:43.263 "block_size": 4096, 00:32:43.263 "num_blocks": 38912, 00:32:43.263 "uuid": "5f944e37-e180-41a4-b74f-542aceabc5b2", 00:32:43.263 "assigned_rate_limits": { 00:32:43.263 "rw_ios_per_sec": 0, 00:32:43.263 "rw_mbytes_per_sec": 0, 00:32:43.263 
"r_mbytes_per_sec": 0, 00:32:43.263 "w_mbytes_per_sec": 0 00:32:43.263 }, 00:32:43.263 "claimed": false, 00:32:43.263 "zoned": false, 00:32:43.263 "supported_io_types": { 00:32:43.263 "read": true, 00:32:43.263 "write": true, 00:32:43.263 "unmap": true, 00:32:43.263 "flush": false, 00:32:43.263 "reset": true, 00:32:43.263 "nvme_admin": false, 00:32:43.263 "nvme_io": false, 00:32:43.263 "nvme_io_md": false, 00:32:43.263 "write_zeroes": true, 00:32:43.263 "zcopy": false, 00:32:43.263 "get_zone_info": false, 00:32:43.263 "zone_management": false, 00:32:43.263 "zone_append": false, 00:32:43.263 "compare": false, 00:32:43.263 "compare_and_write": false, 00:32:43.263 "abort": false, 00:32:43.263 "seek_hole": true, 00:32:43.263 "seek_data": true, 00:32:43.263 "copy": false, 00:32:43.263 "nvme_iov_md": false 00:32:43.263 }, 00:32:43.263 "driver_specific": { 00:32:43.263 "lvol": { 00:32:43.263 "lvol_store_uuid": "d77797c5-3553-4c13-811a-83acd3c14b99", 00:32:43.263 "base_bdev": "aio_bdev", 00:32:43.263 "thin_provision": false, 00:32:43.263 "num_allocated_clusters": 38, 00:32:43.263 "snapshot": false, 00:32:43.263 "clone": false, 00:32:43.263 "esnap_clone": false 00:32:43.263 } 00:32:43.263 } 00:32:43.263 } 00:32:43.263 ] 00:32:43.263 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:43.263 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:43.263 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:43.263 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:43.263 20:52:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:43.263 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:43.522 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:43.522 20:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5f944e37-e180-41a4-b74f-542aceabc5b2 00:32:43.781 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d77797c5-3553-4c13-811a-83acd3c14b99 00:32:44.040 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:44.040 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:44.040 00:32:44.040 real 0m16.569s 00:32:44.040 user 0m33.852s 00:32:44.040 sys 0m3.855s 00:32:44.040 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:44.040 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:44.040 ************************************ 00:32:44.040 END TEST lvs_grow_dirty 00:32:44.040 ************************************ 
00:32:44.040 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:44.299 nvmf_trace.0 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:44.299 20:52:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:44.299 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:44.299 rmmod nvme_tcp 00:32:44.299 rmmod nvme_fabrics 00:32:44.299 rmmod nvme_keyring 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 577496 ']' 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 577496 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 577496 ']' 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 577496 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 577496 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:44.300 20:52:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 577496' 00:32:44.300 killing process with pid 577496 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 577496 00:32:44.300 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 577496 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.559 20:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.465 20:52:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:46.465 00:32:46.465 real 0m41.849s 00:32:46.465 user 0m52.117s 00:32:46.465 sys 0m10.203s 00:32:46.465 20:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:46.465 20:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:46.465 ************************************ 00:32:46.465 END TEST nvmf_lvs_grow 00:32:46.465 ************************************ 00:32:46.724 20:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:46.724 20:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:46.724 20:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:46.724 20:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:46.724 ************************************ 00:32:46.724 START TEST nvmf_bdev_io_wait 00:32:46.724 ************************************ 00:32:46.724 20:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:46.724 * Looking for test storage... 
00:32:46.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:46.724 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:46.724 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:32:46.724 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:46.724 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:46.724 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:46.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.725 --rc genhtml_branch_coverage=1 00:32:46.725 --rc genhtml_function_coverage=1 00:32:46.725 --rc genhtml_legend=1 00:32:46.725 --rc geninfo_all_blocks=1 00:32:46.725 --rc geninfo_unexecuted_blocks=1 00:32:46.725 00:32:46.725 ' 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:46.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.725 --rc genhtml_branch_coverage=1 00:32:46.725 --rc genhtml_function_coverage=1 00:32:46.725 --rc genhtml_legend=1 00:32:46.725 --rc geninfo_all_blocks=1 00:32:46.725 --rc geninfo_unexecuted_blocks=1 00:32:46.725 00:32:46.725 ' 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:46.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.725 --rc genhtml_branch_coverage=1 00:32:46.725 --rc genhtml_function_coverage=1 00:32:46.725 --rc genhtml_legend=1 00:32:46.725 --rc geninfo_all_blocks=1 00:32:46.725 --rc geninfo_unexecuted_blocks=1 00:32:46.725 00:32:46.725 ' 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:46.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.725 --rc genhtml_branch_coverage=1 00:32:46.725 --rc genhtml_function_coverage=1 
00:32:46.725 --rc genhtml_legend=1 00:32:46.725 --rc geninfo_all_blocks=1 00:32:46.725 --rc geninfo_unexecuted_blocks=1 00:32:46.725 00:32:46.725 ' 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:46.725 20:52:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.725 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.984 20:52:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:46.984 20:52:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:46.984 20:52:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:46.984 20:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:53.549 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:53.550 20:52:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:53.550 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:53.550 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:53.550 Found net devices under 0000:af:00.0: cvl_0_0 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:53.550 Found net devices under 0000:af:00.1: cvl_0_1 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:53.550 20:52:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:53.550 20:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:53.550 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:53.550 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:53.550 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:53.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:53.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:32:53.550 00:32:53.550 --- 10.0.0.2 ping statistics --- 00:32:53.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.550 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:53.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:53.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:32:53.551 00:32:53.551 --- 10.0.0.1 ping statistics --- 00:32:53.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.551 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:53.551 20:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=581797 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 581797 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 581797 ']' 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:53.551 [2024-12-05 20:52:46.127211] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:53.551 [2024-12-05 20:52:46.128082] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:32:53.551 [2024-12-05 20:52:46.128114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.551 [2024-12-05 20:52:46.202215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:53.551 [2024-12-05 20:52:46.245540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.551 [2024-12-05 20:52:46.245575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.551 [2024-12-05 20:52:46.245581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.551 [2024-12-05 20:52:46.245587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.551 [2024-12-05 20:52:46.245592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:53.551 [2024-12-05 20:52:46.246964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.551 [2024-12-05 20:52:46.247090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.551 [2024-12-05 20:52:46.247205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.551 [2024-12-05 20:52:46.247206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:53.551 [2024-12-05 20:52:46.247444] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.551 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:53.811 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.811 20:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:53.811 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.811 20:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:53.811 [2024-12-05 20:52:47.051179] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:53.811 [2024-12-05 20:52:47.051279] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:53.811 [2024-12-05 20:52:47.051603] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:53.811 [2024-12-05 20:52:47.051845] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:53.811 [2024-12-05 20:52:47.063869] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:53.811 Malloc0 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.811 20:52:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:53.811 [2024-12-05 20:52:47.140180] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=581849 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=581851 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:53.811 20:52:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:53.811 { 00:32:53.811 "params": { 00:32:53.811 "name": "Nvme$subsystem", 00:32:53.811 "trtype": "$TEST_TRANSPORT", 00:32:53.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:53.811 "adrfam": "ipv4", 00:32:53.811 "trsvcid": "$NVMF_PORT", 00:32:53.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:53.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:53.811 "hdgst": ${hdgst:-false}, 00:32:53.811 "ddgst": ${ddgst:-false} 00:32:53.811 }, 00:32:53.811 "method": "bdev_nvme_attach_controller" 00:32:53.811 } 00:32:53.811 EOF 00:32:53.811 )") 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=581853 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:53.811 20:52:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=581856 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:53.811 { 00:32:53.811 "params": { 00:32:53.811 "name": "Nvme$subsystem", 00:32:53.811 "trtype": "$TEST_TRANSPORT", 00:32:53.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:53.811 "adrfam": "ipv4", 00:32:53.811 "trsvcid": "$NVMF_PORT", 00:32:53.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:53.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:53.811 "hdgst": ${hdgst:-false}, 00:32:53.811 "ddgst": ${ddgst:-false} 00:32:53.811 }, 00:32:53.811 "method": "bdev_nvme_attach_controller" 00:32:53.811 } 00:32:53.811 EOF 00:32:53.811 )") 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:53.811 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:53.812 { 00:32:53.812 "params": { 00:32:53.812 "name": 
"Nvme$subsystem", 00:32:53.812 "trtype": "$TEST_TRANSPORT", 00:32:53.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:53.812 "adrfam": "ipv4", 00:32:53.812 "trsvcid": "$NVMF_PORT", 00:32:53.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:53.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:53.812 "hdgst": ${hdgst:-false}, 00:32:53.812 "ddgst": ${ddgst:-false} 00:32:53.812 }, 00:32:53.812 "method": "bdev_nvme_attach_controller" 00:32:53.812 } 00:32:53.812 EOF 00:32:53.812 )") 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:53.812 { 00:32:53.812 "params": { 00:32:53.812 "name": "Nvme$subsystem", 00:32:53.812 "trtype": "$TEST_TRANSPORT", 00:32:53.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:53.812 "adrfam": "ipv4", 00:32:53.812 "trsvcid": "$NVMF_PORT", 00:32:53.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:53.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:53.812 "hdgst": ${hdgst:-false}, 00:32:53.812 "ddgst": ${ddgst:-false} 00:32:53.812 }, 00:32:53.812 "method": 
"bdev_nvme_attach_controller" 00:32:53.812 } 00:32:53.812 EOF 00:32:53.812 )") 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 581849 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:53.812 "params": { 00:32:53.812 "name": "Nvme1", 00:32:53.812 "trtype": "tcp", 00:32:53.812 "traddr": "10.0.0.2", 00:32:53.812 "adrfam": "ipv4", 00:32:53.812 "trsvcid": "4420", 00:32:53.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:53.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:53.812 "hdgst": false, 00:32:53.812 "ddgst": false 00:32:53.812 }, 00:32:53.812 "method": "bdev_nvme_attach_controller" 00:32:53.812 }' 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:53.812 "params": { 00:32:53.812 "name": "Nvme1", 00:32:53.812 "trtype": "tcp", 00:32:53.812 "traddr": "10.0.0.2", 00:32:53.812 "adrfam": "ipv4", 00:32:53.812 "trsvcid": "4420", 00:32:53.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:53.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:53.812 "hdgst": false, 00:32:53.812 "ddgst": false 00:32:53.812 }, 00:32:53.812 "method": "bdev_nvme_attach_controller" 00:32:53.812 }' 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:53.812 "params": { 00:32:53.812 "name": "Nvme1", 00:32:53.812 "trtype": "tcp", 00:32:53.812 "traddr": "10.0.0.2", 00:32:53.812 "adrfam": "ipv4", 00:32:53.812 "trsvcid": "4420", 00:32:53.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:53.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:53.812 "hdgst": false, 00:32:53.812 "ddgst": false 00:32:53.812 }, 00:32:53.812 "method": "bdev_nvme_attach_controller" 00:32:53.812 }' 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:53.812 20:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:53.812 "params": { 00:32:53.812 "name": "Nvme1", 00:32:53.812 "trtype": "tcp", 00:32:53.812 "traddr": "10.0.0.2", 00:32:53.812 "adrfam": "ipv4", 00:32:53.812 "trsvcid": "4420", 00:32:53.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:53.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:53.812 "hdgst": false, 00:32:53.812 "ddgst": false 00:32:53.812 }, 00:32:53.812 "method": "bdev_nvme_attach_controller" 
00:32:53.812 }' 00:32:53.812 [2024-12-05 20:52:47.192055] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:32:53.812 [2024-12-05 20:52:47.192112] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:53.812 [2024-12-05 20:52:47.193307] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:32:53.812 [2024-12-05 20:52:47.193350] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:53.812 [2024-12-05 20:52:47.195701] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:32:53.812 [2024-12-05 20:52:47.195701] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:32:53.812 [2024-12-05 20:52:47.195756] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:53.812 [2024-12-05 20:52:47.195755] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:54.072 [2024-12-05 20:52:47.378933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.072 [2024-12-05 20:52:47.419231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:54.072 [2024-12-05 20:52:47.451156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.072 [2024-12-05 20:52:47.497903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:54.331 [2024-12-05 20:52:47.520896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.331 [2024-12-05 20:52:47.550673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.331 [2024-12-05 20:52:47.561087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:54.331 [2024-12-05 20:52:47.591330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:54.331 Running I/O for 1 seconds... 00:32:54.331 Running I/O for 1 seconds... 00:32:54.590 Running I/O for 1 seconds... 00:32:54.590 Running I/O for 1 seconds... 
00:32:55.533 265624.00 IOPS, 1037.59 MiB/s 00:32:55.533 Latency(us) 00:32:55.533 [2024-12-05T19:52:48.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.533 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:55.533 Nvme1n1 : 1.00 265255.07 1036.15 0.00 0.00 480.48 211.32 1377.75 00:32:55.533 [2024-12-05T19:52:48.974Z] =================================================================================================================== 00:32:55.533 [2024-12-05T19:52:48.974Z] Total : 265255.07 1036.15 0.00 0.00 480.48 211.32 1377.75 00:32:55.533 7916.00 IOPS, 30.92 MiB/s 00:32:55.533 Latency(us) 00:32:55.533 [2024-12-05T19:52:48.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.533 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:55.533 Nvme1n1 : 1.02 7936.66 31.00 0.00 0.00 16028.99 3261.91 21209.83 00:32:55.533 [2024-12-05T19:52:48.974Z] =================================================================================================================== 00:32:55.533 [2024-12-05T19:52:48.974Z] Total : 7936.66 31.00 0.00 0.00 16028.99 3261.91 21209.83 00:32:55.533 14295.00 IOPS, 55.84 MiB/s [2024-12-05T19:52:48.974Z] 7426.00 IOPS, 29.01 MiB/s 00:32:55.533 Latency(us) 00:32:55.533 [2024-12-05T19:52:48.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.533 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:55.533 Nvme1n1 : 1.01 14355.00 56.07 0.00 0.00 8891.42 1899.05 13583.83 00:32:55.533 [2024-12-05T19:52:48.974Z] =================================================================================================================== 00:32:55.533 [2024-12-05T19:52:48.974Z] Total : 14355.00 56.07 0.00 0.00 8891.42 1899.05 13583.83 00:32:55.533 00:32:55.533 Latency(us) 00:32:55.533 [2024-12-05T19:52:48.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:32:55.533 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:55.533 Nvme1n1 : 1.01 7515.63 29.36 0.00 0.00 16984.76 4587.52 32887.16 00:32:55.533 [2024-12-05T19:52:48.974Z] =================================================================================================================== 00:32:55.533 [2024-12-05T19:52:48.974Z] Total : 7515.63 29.36 0.00 0.00 16984.76 4587.52 32887.16 00:32:55.533 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 581851 00:32:55.533 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 581853 00:32:55.533 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 581856 00:32:55.533 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:55.533 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.533 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:55.533 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.533 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:55.533 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:55.533 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:55.534 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:55.534 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:32:55.534 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:55.534 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:55.534 20:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:55.534 rmmod nvme_tcp 00:32:55.793 rmmod nvme_fabrics 00:32:55.793 rmmod nvme_keyring 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 581797 ']' 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 581797 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 581797 ']' 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 581797 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 581797 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 581797' 00:32:55.793 killing process with pid 581797 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 581797 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 581797 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:55.793 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:56.052 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:56.052 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:56.052 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:56.052 20:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:56.052 20:52:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.958 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:57.958 00:32:57.958 real 0m11.329s 00:32:57.958 user 0m15.003s 00:32:57.958 sys 0m6.314s 00:32:57.958 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:57.959 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:57.959 ************************************ 00:32:57.959 END TEST nvmf_bdev_io_wait 00:32:57.959 ************************************ 00:32:57.959 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:57.959 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:57.959 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:57.959 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:57.959 ************************************ 00:32:57.959 START TEST nvmf_queue_depth 00:32:57.959 ************************************ 00:32:57.959 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:58.219 * Looking for test storage... 
00:32:58.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:58.219 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:58.219 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:32:58.219 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:58.219 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:58.219 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:58.219 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:58.219 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:58.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.220 --rc genhtml_branch_coverage=1 00:32:58.220 --rc genhtml_function_coverage=1 00:32:58.220 --rc genhtml_legend=1 00:32:58.220 --rc geninfo_all_blocks=1 00:32:58.220 --rc geninfo_unexecuted_blocks=1 00:32:58.220 00:32:58.220 ' 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:58.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.220 --rc genhtml_branch_coverage=1 00:32:58.220 --rc genhtml_function_coverage=1 00:32:58.220 --rc genhtml_legend=1 00:32:58.220 --rc geninfo_all_blocks=1 00:32:58.220 --rc geninfo_unexecuted_blocks=1 00:32:58.220 00:32:58.220 ' 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:58.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.220 --rc genhtml_branch_coverage=1 00:32:58.220 --rc genhtml_function_coverage=1 00:32:58.220 --rc genhtml_legend=1 00:32:58.220 --rc geninfo_all_blocks=1 00:32:58.220 --rc geninfo_unexecuted_blocks=1 00:32:58.220 00:32:58.220 ' 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:58.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.220 --rc genhtml_branch_coverage=1 00:32:58.220 --rc genhtml_function_coverage=1 00:32:58.220 --rc genhtml_legend=1 00:32:58.220 --rc 
geninfo_all_blocks=1 00:32:58.220 --rc geninfo_unexecuted_blocks=1 00:32:58.220 00:32:58.220 ' 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.220 20:52:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:58.220 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:58.221 20:52:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:58.221 20:52:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:58.221 20:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:04.790 
20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:04.790 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:04.791 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.791 20:52:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:04.791 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:04.791 Found net devices under 0000:af:00.0: cvl_0_0 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:04.791 Found net devices under 0000:af:00.1: cvl_0_1 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:04.791 20:52:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:04.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:04.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:33:04.791 00:33:04.791 --- 10.0.0.2 ping statistics --- 00:33:04.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.791 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:04.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:04.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:33:04.791 00:33:04.791 --- 10.0.0.1 ping statistics --- 00:33:04.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.791 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:04.791 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:04.792 20:52:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=585850 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 585850 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 585850 ']' 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:04.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
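[Editor's note] The trace above (common.sh `nvmf_tcp_init` @250-@291 and `nvmfappstart` @507-@510) wires a point-to-point link into a network namespace and launches the SPDK target inside it. A condensed dry-run sketch follows; the `ECHO=echo` guard and the shortened `nvmf_tgt` path are additions here, all interface names, addresses, and flags are copied from the log:

```shell
# Dry-run sketch of the nvmf_tcp_init + nvmfappstart sequence traced above.
# ECHO=echo turns every privileged command into a print, so this runs
# without root or SPDK installed; drop it to execute for real.
ECHO=echo
NS=cvl_0_0_ns_spdk
$ECHO ip netns add "$NS"                                      # target lives in its own netns
$ECHO ip link set cvl_0_0 netns "$NS"                         # move the target-side port in
$ECHO ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side (host)
$ECHO ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0 # target side (namespace)
$ECHO ip link set cvl_0_1 up
$ECHO ip netns exec "$NS" ip link set cvl_0_0 up
$ECHO iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
# nvmfappstart: run the target inside the namespace, then waitforlisten
# polls /var/tmp/spdk.sock until the target answers RPCs.
NVMF_APP="ip netns exec $NS ./build/bin/nvmf_tgt"             # path shortened from the log
$ECHO $NVMF_APP -i 0 -e 0xFFFF --interrupt-mode -m 0x2
```

The two pings that follow in the log (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) verify this wiring before any NVMe traffic is attempted.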
00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:04.792 20:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:04.792 [2024-12-05 20:52:57.565162] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:04.792 [2024-12-05 20:52:57.566030] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:33:04.792 [2024-12-05 20:52:57.566067] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:04.792 [2024-12-05 20:52:57.641902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.792 [2024-12-05 20:52:57.680752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:04.792 [2024-12-05 20:52:57.680787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:04.792 [2024-12-05 20:52:57.680794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:04.792 [2024-12-05 20:52:57.680800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:04.792 [2024-12-05 20:52:57.680804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:04.792 [2024-12-05 20:52:57.681330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.792 [2024-12-05 20:52:57.747437] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:04.792 [2024-12-05 20:52:57.747627] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:05.052 [2024-12-05 20:52:58.409997] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:05.052 Malloc0 00:33:05.052 20:52:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.052 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:05.052 [2024-12-05 20:52:58.490154] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.311 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.311 
20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=586125 00:33:05.311 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:05.311 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:05.311 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 586125 /var/tmp/bdevperf.sock 00:33:05.311 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 586125 ']' 00:33:05.311 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:05.311 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:05.311 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:05.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:05.311 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:05.311 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:05.311 [2024-12-05 20:52:58.543572] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:33:05.311 [2024-12-05 20:52:58.543619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid586125 ] 00:33:05.311 [2024-12-05 20:52:58.617478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.311 [2024-12-05 20:52:58.655287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.571 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:05.571 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:05.571 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:05.571 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.571 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:05.571 NVMe0n1 00:33:05.571 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.571 20:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:05.571 Running I/O for 10 seconds... 
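[Editor's note] The `rpc_cmd` calls traced earlier (queue_depth.sh @23-@27) provision the target that bdevperf is now exercising. Condensed as direct `rpc.py` invocations, this is a sketch: the `rpc.py` path and the `ECHO=echo` dry-run guard are assumptions here; the RPC names and arguments are copied from the log:

```shell
# Dry-run of the target provisioning RPCs from queue_depth.sh (@23-@27).
# ECHO=echo makes this safe to run anywhere; drop it against a live target.
ECHO=echo
RPC="./scripts/rpc.py"                        # assumed path to the SPDK RPC client
$ECHO "$RPC" nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule
$ECHO "$RPC" bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev
$ECHO "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$ECHO "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$ECHO "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

bdevperf then attaches to that subsystem over TCP (`bdev_nvme_attach_controller ... -a 10.0.0.2 -s 4420`) and drives it at queue depth 1024, which is what the IOPS samples below measure.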
00:33:07.885 13077.00 IOPS, 51.08 MiB/s [2024-12-05T19:53:02.260Z] 13174.50 IOPS, 51.46 MiB/s [2024-12-05T19:53:03.245Z] 13303.33 IOPS, 51.97 MiB/s [2024-12-05T19:53:04.180Z] 13391.25 IOPS, 52.31 MiB/s [2024-12-05T19:53:05.116Z] 13493.20 IOPS, 52.71 MiB/s [2024-12-05T19:53:06.050Z] 13525.17 IOPS, 52.83 MiB/s [2024-12-05T19:53:06.987Z] 13584.86 IOPS, 53.07 MiB/s [2024-12-05T19:53:08.362Z] 13571.50 IOPS, 53.01 MiB/s [2024-12-05T19:53:09.309Z] 13601.33 IOPS, 53.13 MiB/s [2024-12-05T19:53:09.309Z] 13618.50 IOPS, 53.20 MiB/s 00:33:15.868 Latency(us) 00:33:15.868 [2024-12-05T19:53:09.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.868 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:15.868 Verification LBA range: start 0x0 length 0x4000 00:33:15.868 NVMe0n1 : 10.06 13637.06 53.27 0.00 0.00 74849.07 17277.67 49092.42 00:33:15.868 [2024-12-05T19:53:09.309Z] =================================================================================================================== 00:33:15.868 [2024-12-05T19:53:09.309Z] Total : 13637.06 53.27 0.00 0.00 74849.07 17277.67 49092.42 00:33:15.868 { 00:33:15.868 "results": [ 00:33:15.868 { 00:33:15.868 "job": "NVMe0n1", 00:33:15.868 "core_mask": "0x1", 00:33:15.868 "workload": "verify", 00:33:15.868 "status": "finished", 00:33:15.868 "verify_range": { 00:33:15.868 "start": 0, 00:33:15.868 "length": 16384 00:33:15.868 }, 00:33:15.868 "queue_depth": 1024, 00:33:15.868 "io_size": 4096, 00:33:15.868 "runtime": 10.060816, 00:33:15.868 "iops": 13637.064826550848, 00:33:15.868 "mibps": 53.26978447871425, 00:33:15.868 "io_failed": 0, 00:33:15.868 "io_timeout": 0, 00:33:15.868 "avg_latency_us": 74849.07031476278, 00:33:15.868 "min_latency_us": 17277.672727272726, 00:33:15.868 "max_latency_us": 49092.42181818182 00:33:15.868 } 00:33:15.868 ], 00:33:15.868 "core_count": 1 00:33:15.868 } 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 586125 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 586125 ']' 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 586125 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 586125 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 586125' 00:33:15.868 killing process with pid 586125 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 586125 00:33:15.868 Received shutdown signal, test time was about 10.000000 seconds 00:33:15.868 00:33:15.868 Latency(us) 00:33:15.868 [2024-12-05T19:53:09.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.868 [2024-12-05T19:53:09.309Z] =================================================================================================================== 00:33:15.868 [2024-12-05T19:53:09.309Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 586125 00:33:15.868 20:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:15.868 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.868 rmmod nvme_tcp 00:33:15.868 rmmod nvme_fabrics 00:33:16.125 rmmod nvme_keyring 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 585850 ']' 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 585850 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 585850 ']' 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 585850 00:33:16.125 20:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 585850 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 585850' 00:33:16.125 killing process with pid 585850 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 585850 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 585850 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:16.125 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
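[Editor's note] The `iptr` step above undoes the firewall change from setup. Every rule the harness inserts carries an `SPDK_NVMF` comment (visible in the `ipts` call earlier), so cleanup reduces to `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A self-contained sketch of that filter over a canned ruleset (the sample rules are illustrative, not from the log):

```shell
# Demonstrates the iptr cleanup pattern: restore a ruleset with every
# SPDK_NVMF-tagged rule filtered out. Runs on sample data, not the live firewall.
rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:queue_depth
-A INPUT -i lo -j ACCEPT'
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)   # stand-in for iptables-save | grep -v
printf '%s\n' "$kept"                                # what iptables-restore would receive
```

Tagging rules with a comment at insert time is what makes this cleanup safe: only the harness's own rules are dropped, and any pre-existing firewall rules survive the restore.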
00:33:16.384 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:16.384 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:16.384 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.384 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:16.384 20:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.289 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:18.289 00:33:18.289 real 0m20.256s 00:33:18.289 user 0m22.643s 00:33:18.289 sys 0m6.417s 00:33:18.289 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:18.289 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:18.289 ************************************ 00:33:18.289 END TEST nvmf_queue_depth 00:33:18.289 ************************************ 00:33:18.289 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:18.289 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:18.289 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:18.289 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:18.289 ************************************ 00:33:18.289 START 
TEST nvmf_target_multipath 00:33:18.289 ************************************ 00:33:18.289 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:18.548 * Looking for test storage... 00:33:18.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:18.548 20:53:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:18.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.548 --rc genhtml_branch_coverage=1 00:33:18.548 --rc genhtml_function_coverage=1 00:33:18.548 --rc genhtml_legend=1 00:33:18.548 --rc geninfo_all_blocks=1 00:33:18.548 --rc geninfo_unexecuted_blocks=1 00:33:18.548 00:33:18.548 ' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:18.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.548 --rc genhtml_branch_coverage=1 00:33:18.548 --rc genhtml_function_coverage=1 00:33:18.548 --rc genhtml_legend=1 00:33:18.548 --rc geninfo_all_blocks=1 00:33:18.548 --rc geninfo_unexecuted_blocks=1 00:33:18.548 00:33:18.548 ' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:18.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.548 --rc genhtml_branch_coverage=1 00:33:18.548 --rc genhtml_function_coverage=1 00:33:18.548 --rc genhtml_legend=1 00:33:18.548 --rc geninfo_all_blocks=1 00:33:18.548 --rc geninfo_unexecuted_blocks=1 00:33:18.548 00:33:18.548 ' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:18.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.548 --rc genhtml_branch_coverage=1 00:33:18.548 --rc genhtml_function_coverage=1 00:33:18.548 --rc genhtml_legend=1 00:33:18.548 --rc geninfo_all_blocks=1 00:33:18.548 --rc geninfo_unexecuted_blocks=1 00:33:18.548 00:33:18.548 ' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:18.548 20:53:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.548 20:53:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:18.548 20:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:25.131 20:53:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.131 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:25.132 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:25.132 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:25.132 Found net devices under 0000:af:00.0: cvl_0_0 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.132 20:53:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:25.132 Found net devices under 0000:af:00.1: cvl_0_1 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.132 20:53:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.132 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.133 20:53:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:33:25.133 00:33:25.133 --- 10.0.0.2 ping statistics --- 00:33:25.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.133 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:25.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:33:25.133 00:33:25.133 --- 10.0.0.1 ping statistics --- 00:33:25.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.133 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:25.133 only one NIC for nvmf test 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:25.133 20:53:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:25.133 rmmod nvme_tcp 00:33:25.133 rmmod nvme_fabrics 00:33:25.133 rmmod nvme_keyring 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:25.133 20:53:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.133 20:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.040 
20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.040 00:33:27.040 real 0m8.351s 00:33:27.040 user 0m1.773s 00:33:27.040 sys 0m4.582s 00:33:27.040 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:27.041 ************************************ 00:33:27.041 END TEST nvmf_target_multipath 00:33:27.041 ************************************ 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:27.041 ************************************ 00:33:27.041 START TEST nvmf_zcopy 00:33:27.041 ************************************ 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:27.041 * Looking for test storage... 
00:33:27.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:27.041 20:53:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:27.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.041 --rc genhtml_branch_coverage=1 00:33:27.041 --rc genhtml_function_coverage=1 00:33:27.041 --rc genhtml_legend=1 00:33:27.041 --rc geninfo_all_blocks=1 00:33:27.041 --rc geninfo_unexecuted_blocks=1 00:33:27.041 00:33:27.041 ' 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:27.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.041 --rc genhtml_branch_coverage=1 00:33:27.041 --rc genhtml_function_coverage=1 00:33:27.041 --rc genhtml_legend=1 00:33:27.041 --rc geninfo_all_blocks=1 00:33:27.041 --rc geninfo_unexecuted_blocks=1 00:33:27.041 00:33:27.041 ' 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:27.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.041 --rc genhtml_branch_coverage=1 00:33:27.041 --rc genhtml_function_coverage=1 00:33:27.041 --rc genhtml_legend=1 00:33:27.041 --rc geninfo_all_blocks=1 00:33:27.041 --rc geninfo_unexecuted_blocks=1 00:33:27.041 00:33:27.041 ' 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:27.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.041 --rc genhtml_branch_coverage=1 00:33:27.041 --rc genhtml_function_coverage=1 00:33:27.041 --rc genhtml_legend=1 00:33:27.041 --rc geninfo_all_blocks=1 00:33:27.041 --rc geninfo_unexecuted_blocks=1 00:33:27.041 00:33:27.041 ' 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.041 20:53:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.041 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:27.042 20:53:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:27.042 20:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:33.605 
20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:33.605 20:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:33.605 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:33.605 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.605 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:33.606 Found net devices under 0000:af:00.0: cvl_0_0 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:33.606 Found net devices under 0000:af:00.1: cvl_0_1 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:33.606 20:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:33.606 20:53:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:33.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:33.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:33:33.606 00:33:33.606 --- 10.0.0.2 ping statistics --- 00:33:33.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.606 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:33.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:33.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:33:33.606 00:33:33.606 --- 10.0.0.1 ping statistics --- 00:33:33.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.606 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=595155 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 595155 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 595155 ']' 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:33.606 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.606 [2024-12-05 20:53:26.312654] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:33.606 [2024-12-05 20:53:26.313526] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:33:33.606 [2024-12-05 20:53:26.313558] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.606 [2024-12-05 20:53:26.386783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.606 [2024-12-05 20:53:26.424615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.606 [2024-12-05 20:53:26.424651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.606 [2024-12-05 20:53:26.424657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.606 [2024-12-05 20:53:26.424663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.607 [2024-12-05 20:53:26.424667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:33.607 [2024-12-05 20:53:26.425210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.607 [2024-12-05 20:53:26.491900] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:33.607 [2024-12-05 20:53:26.492090] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
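The network plumbing that `nvmf_tcp_init` performed in the trace above (nvmf/common.sh@265–291: namespace creation, moving the target NIC, addressing, the iptables accept rule, and the two-way ping check) can be condensed into a dry-run sketch. The interface names (`cvl_0_0`/`cvl_0_1`), the namespace name, and the 10.0.0.x addresses are taken from this particular run, not fixed SPDK defaults; `DRY_RUN` defaults to `echo` so the commands are only printed (running them for real requires root and the matching NICs).

```shell
# Dry-run sketch of nvmf_tcp_init's namespace setup, as seen in the log above.
# Leave DRY_RUN unset to print commands; set DRY_RUN= (empty) to execute.
DRY_RUN=${DRY_RUN-echo}

NS=cvl_0_0_ns_spdk   # target-side network namespace (from this run)
TGT_IF=cvl_0_0       # target NIC, moved into the namespace
INI_IF=cvl_0_1       # initiator NIC, stays in the default namespace
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

$DRY_RUN ip netns add "$NS"
$DRY_RUN ip link set "$TGT_IF" netns "$NS"
$DRY_RUN ip addr add "$INI_IP/24" dev "$INI_IF"
$DRY_RUN ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
$DRY_RUN ip link set "$INI_IF" up
$DRY_RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$DRY_RUN ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port toward the initiator interface
$DRY_RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions, as the log does
$DRY_RUN ping -c 1 "$TGT_IP"
$DRY_RUN ip netns exec "$NS" ping -c 1 "$INI_IP"
```

In this layout, `nvmf_tgt` is later launched with `ip netns exec cvl_0_0_ns_spdk …` (visible below in the log), so it listens on 10.0.0.2 while initiator tools in the default namespace connect from 10.0.0.1.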
00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.607 [2024-12-05 20:53:26.553857] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.607 
20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.607 [2024-12-05 20:53:26.582167] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.607 malloc0 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:33.607 { 00:33:33.607 "params": { 00:33:33.607 "name": "Nvme$subsystem", 00:33:33.607 "trtype": "$TEST_TRANSPORT", 00:33:33.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:33.607 "adrfam": "ipv4", 00:33:33.607 "trsvcid": "$NVMF_PORT", 00:33:33.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:33.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:33.607 "hdgst": ${hdgst:-false}, 00:33:33.607 "ddgst": ${ddgst:-false} 00:33:33.607 }, 00:33:33.607 "method": "bdev_nvme_attach_controller" 00:33:33.607 } 00:33:33.607 EOF 00:33:33.607 )") 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:33.607 20:53:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:33.607 20:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:33.607 "params": { 00:33:33.607 "name": "Nvme1", 00:33:33.607 "trtype": "tcp", 00:33:33.607 "traddr": "10.0.0.2", 00:33:33.607 "adrfam": "ipv4", 00:33:33.607 "trsvcid": "4420", 00:33:33.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:33.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:33.607 "hdgst": false, 00:33:33.607 "ddgst": false 00:33:33.607 }, 00:33:33.607 "method": "bdev_nvme_attach_controller" 00:33:33.607 }' 00:33:33.607 [2024-12-05 20:53:26.676514] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:33:33.607 [2024-12-05 20:53:26.676560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595197 ] 00:33:33.607 [2024-12-05 20:53:26.750913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.607 [2024-12-05 20:53:26.788790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.865 Running I/O for 10 seconds... 
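The trace above shows `gen_nvmf_target_json` expanding a heredoc template (with `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`, and `${hdgst:-false}`/`${ddgst:-false}` defaults) into the `bdev_nvme_attach_controller` JSON that is fed to bdevperf via `/dev/fd/62`. The following is a minimal standalone sketch of that expansion only; the variable names mirror the ones visible in the trace, and the values are examples, not the harness's actual environment.

```shell
#!/bin/sh
# Hypothetical sketch of the heredoc expansion done by gen_nvmf_target_json.
# These values stand in for the test environment seen in the trace above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# ${hdgst:-false} / ${ddgst:-false} default the digest flags when unset,
# exactly as the template in the trace does.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

In the real run this JSON is piped into bdevperf as its `--json` config rather than echoed; the sketch only reproduces the substitution step.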
00:33:35.736 9267.00 IOPS, 72.40 MiB/s [2024-12-05T19:53:30.553Z] 9309.50 IOPS, 72.73 MiB/s [2024-12-05T19:53:31.122Z] 9314.67 IOPS, 72.77 MiB/s [2024-12-05T19:53:32.500Z] 9316.25 IOPS, 72.78 MiB/s [2024-12-05T19:53:33.435Z] 9329.00 IOPS, 72.88 MiB/s [2024-12-05T19:53:34.372Z] 9354.33 IOPS, 73.08 MiB/s [2024-12-05T19:53:35.308Z] 9355.86 IOPS, 73.09 MiB/s [2024-12-05T19:53:36.244Z] 9368.75 IOPS, 73.19 MiB/s [2024-12-05T19:53:37.182Z] 9369.89 IOPS, 73.20 MiB/s [2024-12-05T19:53:37.182Z] 9373.80 IOPS, 73.23 MiB/s 00:33:43.741 Latency(us) 00:33:43.741 [2024-12-05T19:53:37.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.741 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:43.741 Verification LBA range: start 0x0 length 0x1000 00:33:43.741 Nvme1n1 : 10.01 9376.12 73.25 0.00 0.00 13613.72 2189.50 19303.33 00:33:43.741 [2024-12-05T19:53:37.182Z] =================================================================================================================== 00:33:43.741 [2024-12-05T19:53:37.182Z] Total : 9376.12 73.25 0.00 0.00 13613.72 2189.50 19303.33 00:33:44.000 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=597027 00:33:44.000 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:44.000 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:44.000 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:44.001 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:44.001 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:44.001 20:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:44.001 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:44.001 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:44.001 { 00:33:44.001 "params": { 00:33:44.001 "name": "Nvme$subsystem", 00:33:44.001 "trtype": "$TEST_TRANSPORT", 00:33:44.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:44.001 "adrfam": "ipv4", 00:33:44.001 "trsvcid": "$NVMF_PORT", 00:33:44.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:44.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:44.001 "hdgst": ${hdgst:-false}, 00:33:44.001 "ddgst": ${ddgst:-false} 00:33:44.001 }, 00:33:44.001 "method": "bdev_nvme_attach_controller" 00:33:44.001 } 00:33:44.001 EOF 00:33:44.001 )") 00:33:44.001 [2024-12-05 20:53:37.297538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.001 [2024-12-05 20:53:37.297567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.001 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:44.001 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:33:44.001 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:44.001 20:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:44.001 "params": { 00:33:44.001 "name": "Nvme1", 00:33:44.001 "trtype": "tcp", 00:33:44.001 "traddr": "10.0.0.2", 00:33:44.001 "adrfam": "ipv4", 00:33:44.001 "trsvcid": "4420", 00:33:44.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:44.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:44.001 "hdgst": false, 00:33:44.001 "ddgst": false 00:33:44.001 }, 00:33:44.001 "method": "bdev_nvme_attach_controller" 00:33:44.001 }' 00:33:44.001 [2024-12-05 20:53:37.309500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.001 [2024-12-05 20:53:37.309514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.001 [2024-12-05 20:53:37.321498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.001 [2024-12-05 20:53:37.321508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.001 [2024-12-05 20:53:37.333495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.001 [2024-12-05 20:53:37.333504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.001 [2024-12-05 20:53:37.340242] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:33:44.001 [2024-12-05 20:53:37.340282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597027 ] 00:33:44.001 [2024-12-05 20:53:37.345498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.001 [2024-12-05 20:53:37.345507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.001 [2024-12-05 20:53:37.357496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.001 [2024-12-05 20:53:37.357506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.001 [2024-12-05 20:53:37.369498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.001 [2024-12-05 20:53:37.369507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.001 [2024-12-05 20:53:37.381496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.001 [2024-12-05 20:53:37.381504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.001 [2024-12-05 20:53:37.393499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.001 [2024-12-05 20:53:37.393511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.001 [2024-12-05 20:53:37.405496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.001 [2024-12-05 20:53:37.405505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.001 [2024-12-05 20:53:37.412894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.001 [2024-12-05 20:53:37.417499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:44.001 [2024-12-05 20:53:37.417508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.001 [2024-12-05 20:53:37.429499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.001 [2024-12-05 20:53:37.429510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.441498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.441508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.451325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.260 [2024-12-05 20:53:37.453497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.453508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.465516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.465536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.477505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.477524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.489513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.489528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.501497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.501508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.513501] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.513512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.525505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.525514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.537507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.537525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.549500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.549514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.561503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.561518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.573501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.573514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.585500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.585513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.597502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.597518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 Running I/O for 5 seconds... 
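The repeating pairs of `Requested NSID 1 already in use` / `Unable to add namespace` above come from the test re-issuing `nvmf_subsystem_add_ns` with an NSID that is already allocated while the second bdevperf job runs, and treating each rejection as the expected outcome. A minimal sketch of that expect-failure loop is below; `rpc_cmd` here is a stub standing in for the harness's real RPC wrapper (which would invoke `rpc.py` against the target), so the error text is simulated, not produced by SPDK.

```shell
#!/bin/sh
# Hypothetical stand-in for the harness's rpc_cmd wrapper. A real run would
# forward "$@" to SPDK's rpc.py; this stub just reproduces the rejection
# visible in the log above.
rpc_cmd() {
    echo "*ERROR*: Requested NSID 1 already in use" >&2
    return 1
}

# Re-issue the conflicting add_ns RPC several times, counting the expected
# failures, as the zcopy test does while I/O is in flight.
failures=0
for attempt in 1 2 3; do
    if ! rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 2>/dev/null; then
        failures=$((failures + 1))
    fi
done
echo "failures=$failures"
```

The point of the pattern is that every call must fail cleanly without disturbing the running I/O; a call that unexpectedly succeeded (or hung) would indicate a namespace-management bug under load.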
00:33:44.260 [2024-12-05 20:53:37.610972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.610991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.625319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.625337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.638147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.638165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.650766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.650783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.665508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.665526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.678368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.678386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.260 [2024-12-05 20:53:37.693251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.260 [2024-12-05 20:53:37.693269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.707208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.707226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.721449] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.721471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.734891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.734909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.748925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.748943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.762678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.762695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.773972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.773989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.786882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.786899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.801073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.801091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.814814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.814833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.828943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.828960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.842398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.842415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.857271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.857289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.870851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.870869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.884944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.884961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.898229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.898247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.913073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.913092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.926456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 [2024-12-05 20:53:37.926475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.520 [2024-12-05 20:53:37.940609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.520 
[2024-12-05 20:53:37.940627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.521 [2024-12-05 20:53:37.954124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.521 [2024-12-05 20:53:37.954140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:37.968875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:37.968894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:37.982335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:37.982353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:37.993757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:37.993774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.008810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:38.008828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.022423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:38.022441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.036706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:38.036724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.049975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:38.049993] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.062198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:38.062216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.075092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:38.075110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.089404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:38.089421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.102798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:38.102816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.117111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:38.117129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.130810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:38.130827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.145506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.780 [2024-12-05 20:53:38.145525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.780 [2024-12-05 20:53:38.157975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.781 [2024-12-05 20:53:38.157992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:44.781 [2024-12-05 20:53:38.170847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.781 [2024-12-05 20:53:38.170865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.781 [2024-12-05 20:53:38.184966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.781 [2024-12-05 20:53:38.184983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.781 [2024-12-05 20:53:38.198211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.781 [2024-12-05 20:53:38.198229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.781 [2024-12-05 20:53:38.212954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.781 [2024-12-05 20:53:38.212988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.226207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.226225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.240880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.240898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.254104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.254121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.266662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.266680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.281047] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.281070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.295157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.295175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.308717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.308735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.322374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.322391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.337046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.337071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.350546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.350564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.364956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.364974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.378354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.378371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.393221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.393239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.406805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.406822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.420870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.420888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.434601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.434618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.449245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.449262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.462826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.462844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.040 [2024-12-05 20:53:38.477240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.040 [2024-12-05 20:53:38.477258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.491069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.491103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.505564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 
[2024-12-05 20:53:38.505581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.518046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.518070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.531100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.531117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.545256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.545274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.558693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.558710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.572630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.572648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.586350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.586367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.600764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.600781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 18290.00 IOPS, 142.89 MiB/s [2024-12-05T19:53:38.741Z] [2024-12-05 20:53:38.614131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 
[2024-12-05 20:53:38.614148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.626845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.626863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.640746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.640764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.654476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.654493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.669677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.669696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.682002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.682020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.695076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.695093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.709532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.709549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.722145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.722162] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.300 [2024-12-05 20:53:38.734712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.300 [2024-12-05 20:53:38.734729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.748762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.748784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.762422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.762440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.777550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.777568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.791092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.791110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.805762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.805778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.818170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.818189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.832761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.832780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:45.560 [2024-12-05 20:53:38.846393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.846410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.861247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.861264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.874876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.874894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.889120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.889138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.902334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.902351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.917265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.917282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.930751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.930768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.941559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.941576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.560 [2024-12-05 20:53:38.954509] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.560 [2024-12-05 20:53:38.954527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.561 [2024-12-05 20:53:38.968699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.561 [2024-12-05 20:53:38.968716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.561 [2024-12-05 20:53:38.982453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.561 [2024-12-05 20:53:38.982471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.561 [2024-12-05 20:53:38.996752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.561 [2024-12-05 20:53:38.996770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.820 [2024-12-05 20:53:39.010256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.820 [2024-12-05 20:53:39.010279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.820 [2024-12-05 20:53:39.025326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.820 [2024-12-05 20:53:39.025343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.820 [2024-12-05 20:53:39.038452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.820 [2024-12-05 20:53:39.038469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.820 [2024-12-05 20:53:39.052803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.820 [2024-12-05 20:53:39.052820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.820 [2024-12-05 20:53:39.066047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:45.820 [2024-12-05 20:53:39.066069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.820 [2024-12-05 20:53:39.080935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.820 [2024-12-05 20:53:39.080952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.820 [2024-12-05 20:53:39.094687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.820 [2024-12-05 20:53:39.094705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.820 [2024-12-05 20:53:39.108812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.820 [2024-12-05 20:53:39.108830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.820 [2024-12-05 20:53:39.122637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.820 [2024-12-05 20:53:39.122655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.820 [2024-12-05 20:53:39.137040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.820 [2024-12-05 20:53:39.137064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.821 [2024-12-05 20:53:39.150713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.821 [2024-12-05 20:53:39.150730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.821 [2024-12-05 20:53:39.165431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.821 [2024-12-05 20:53:39.165449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.821 [2024-12-05 20:53:39.178941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.821 
[2024-12-05 20:53:39.178959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.821 [2024-12-05 20:53:39.193261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.821 [2024-12-05 20:53:39.193279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.821 [2024-12-05 20:53:39.206322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.821 [2024-12-05 20:53:39.206340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.821 [2024-12-05 20:53:39.221254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.821 [2024-12-05 20:53:39.221271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.821 [2024-12-05 20:53:39.234555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.821 [2024-12-05 20:53:39.234573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.821 [2024-12-05 20:53:39.248772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.821 [2024-12-05 20:53:39.248789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.262574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.262593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.276975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.276996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.290388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.290406] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.305048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.305071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.318428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.318445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.332766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.332783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.346440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.346457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.361066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.361084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.374072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.374100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.389132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.389150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.403040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.403065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:46.080 [2024-12-05 20:53:39.417309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.417328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.430762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.430780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.444612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.444630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.458267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.458285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.472925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.472943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.486228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.486245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.498642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.498659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.080 [2024-12-05 20:53:39.513478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.080 [2024-12-05 20:53:39.513495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.527089] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.527107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.541328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.541350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.554520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.554537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.569321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.569339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.582708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.582725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.596820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.596838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 18319.50 IOPS, 143.12 MiB/s [2024-12-05T19:53:39.780Z] [2024-12-05 20:53:39.610425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.610443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.624771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.624788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.637977] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.637994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.650239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.650257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.665131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.665149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.678757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.678776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.693326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.693345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.706705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.706727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.721228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.721246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.734847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.734865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.749409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.749427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.762669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.762688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.339 [2024-12-05 20:53:39.777151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.339 [2024-12-05 20:53:39.777169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.791341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.791360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.805211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.805230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.818935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.818952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.833472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.833490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.847066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.847084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.861420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 
[2024-12-05 20:53:39.861441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.874936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.874954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.888873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.888891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.902427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.902445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.916995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.917013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.930450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.930468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.944345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.944362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.958107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.958124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.972621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.972640] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.986491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.986508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:39.998900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:39.998918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:40.013919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:40.013936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.598 [2024-12-05 20:53:40.029189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.598 [2024-12-05 20:53:40.029207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.857 [2024-12-05 20:53:40.042869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.857 [2024-12-05 20:53:40.042890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.857 [2024-12-05 20:53:40.056851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.857 [2024-12-05 20:53:40.056870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.857 [2024-12-05 20:53:40.070864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.857 [2024-12-05 20:53:40.070882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.857 [2024-12-05 20:53:40.085486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.857 [2024-12-05 20:53:40.085504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:46.857 [2024-12-05 20:53:40.097720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.857 [2024-12-05 20:53:40.097738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2130 "Requested NSID 1 already in use" / nvmf_rpc.c:1520 "Unable to add namespace") repeats continuously, roughly every 12-15 ms, from 20:53:40.110 through 20:53:42.446; repeats omitted ...]
00:33:47.375 18241.33 IOPS, 142.51 MiB/s [2024-12-05T19:53:40.816Z]
00:33:48.411 18250.75 IOPS, 142.58 MiB/s [2024-12-05T19:53:41.852Z]
00:33:49.186 [2024-12-05 20:53:42.446617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.446635] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 [2024-12-05 20:53:42.458015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.458034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 [2024-12-05 20:53:42.470985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.471004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 [2024-12-05 20:53:42.484945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.484964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 [2024-12-05 20:53:42.498680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.498698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 [2024-12-05 20:53:42.513295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.513314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 [2024-12-05 20:53:42.526797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.526814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 [2024-12-05 20:53:42.537395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.537412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 [2024-12-05 20:53:42.550638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.550655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:49.186 [2024-12-05 20:53:42.565114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.565132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 [2024-12-05 20:53:42.578672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.578690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 [2024-12-05 20:53:42.592732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.592749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 [2024-12-05 20:53:42.606702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.606720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.186 18280.80 IOPS, 142.82 MiB/s [2024-12-05T19:53:42.627Z] [2024-12-05 20:53:42.620099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.186 [2024-12-05 20:53:42.620117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.187 00:33:49.187 Latency(us) 00:33:49.187 [2024-12-05T19:53:42.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.187 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:49.187 Nvme1n1 : 5.01 18280.91 142.82 0.00 0.00 6994.99 1846.92 12392.26 00:33:49.187 [2024-12-05T19:53:42.628Z] =================================================================================================================== 00:33:49.187 [2024-12-05T19:53:42.628Z] Total : 18280.91 142.82 0.00 0.00 6994.99 1846.92 12392.26 00:33:49.445 [2024-12-05 20:53:42.629501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:49.445 [2024-12-05 20:53:42.629520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.641502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.641516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.653512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.653528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.665505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.665520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.677505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.677518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.689501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.689513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.701501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.701514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.713500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.713514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.725500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.725513] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.737496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.737506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.749499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.749510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.761498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.761508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 [2024-12-05 20:53:42.773497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:49.445 [2024-12-05 20:53:42.773508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (597027) - No such process 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 597027 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 
-n 1000000 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:49.445 delay0 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.445 20:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:49.703 [2024-12-05 20:53:42.923629] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:57.819 [2024-12-05 20:53:49.717978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b21df0 is same with the state(6) to be set 00:33:57.819 Initializing NVMe Controllers 00:33:57.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:57.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:57.819 Initialization complete. Launching workers. 
00:33:57.819 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 296, failed: 12681 00:33:57.819 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12897, failed to submit 80 00:33:57.819 success 12791, unsuccessful 106, failed 0 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:57.819 rmmod nvme_tcp 00:33:57.819 rmmod nvme_fabrics 00:33:57.819 rmmod nvme_keyring 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 595155 ']' 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 595155 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 595155 ']' 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 595155 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 595155 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 595155' 00:33:57.819 killing process with pid 595155 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 595155 00:33:57.819 20:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 595155 00:33:57.819 20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:57.819 20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:57.819 20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:57.819 20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:57.819 20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:57.819 20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:57.819 
20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:57.819 20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:57.819 20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:57.819 20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.819 20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.820 20:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.757 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:58.757 00:33:58.757 real 0m31.953s 00:33:58.757 user 0m41.206s 00:33:58.757 sys 0m12.839s 00:33:58.757 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.757 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:58.757 ************************************ 00:33:58.757 END TEST nvmf_zcopy 00:33:58.757 ************************************ 00:33:58.757 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:58.757 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:58.757 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.757 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:58.757 
************************************ 00:33:58.757 START TEST nvmf_nmic 00:33:58.757 ************************************ 00:33:58.757 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:59.018 * Looking for test storage... 00:33:59.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:59.018 20:53:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:59.018 20:53:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.018 --rc genhtml_branch_coverage=1 00:33:59.018 --rc genhtml_function_coverage=1 00:33:59.018 --rc genhtml_legend=1 00:33:59.018 --rc geninfo_all_blocks=1 00:33:59.018 --rc geninfo_unexecuted_blocks=1 00:33:59.018 00:33:59.018 ' 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.018 --rc genhtml_branch_coverage=1 00:33:59.018 --rc genhtml_function_coverage=1 00:33:59.018 --rc genhtml_legend=1 00:33:59.018 --rc geninfo_all_blocks=1 00:33:59.018 --rc geninfo_unexecuted_blocks=1 00:33:59.018 00:33:59.018 ' 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:59.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.018 --rc genhtml_branch_coverage=1 00:33:59.018 --rc genhtml_function_coverage=1 00:33:59.018 --rc genhtml_legend=1 00:33:59.018 --rc geninfo_all_blocks=1 00:33:59.018 --rc geninfo_unexecuted_blocks=1 00:33:59.018 00:33:59.018 ' 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:59.018 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.018 --rc genhtml_branch_coverage=1 00:33:59.018 --rc genhtml_function_coverage=1 00:33:59.018 --rc genhtml_legend=1 00:33:59.018 --rc geninfo_all_blocks=1 00:33:59.018 --rc geninfo_unexecuted_blocks=1 00:33:59.018 00:33:59.018 ' 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.018 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:59.019 20:53:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.019 20:53:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:59.019 20:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.592 20:53:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:05.592 20:53:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:05.592 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:05.592 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.592 20:53:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.592 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:05.593 Found net devices under 0000:af:00.0: cvl_0_0 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.593 20:53:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:05.593 Found net devices under 0000:af:00.1: cvl_0_1 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:05.593 20:53:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:05.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:05.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:34:05.593 00:34:05.593 --- 10.0.0.2 ping statistics --- 00:34:05.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.593 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:05.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:05.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:34:05.593 00:34:05.593 --- 10.0.0.1 ping statistics --- 00:34:05.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.593 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=602843 
00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 602843 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 602843 ']' 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.593 20:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.593 [2024-12-05 20:53:58.404321] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:05.593 [2024-12-05 20:53:58.405129] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:34:05.593 [2024-12-05 20:53:58.405156] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.593 [2024-12-05 20:53:58.481477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:05.593 [2024-12-05 20:53:58.521679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:05.593 [2024-12-05 20:53:58.521720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:05.593 [2024-12-05 20:53:58.521727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:05.593 [2024-12-05 20:53:58.521733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:05.593 [2024-12-05 20:53:58.521740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:05.593 [2024-12-05 20:53:58.523265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.593 [2024-12-05 20:53:58.523380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:05.593 [2024-12-05 20:53:58.523494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.593 [2024-12-05 20:53:58.523495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:05.593 [2024-12-05 20:53:58.589849] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:05.593 [2024-12-05 20:53:58.590694] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:05.593 [2024-12-05 20:53:58.590763] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:05.593 [2024-12-05 20:53:58.590952] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:05.593 [2024-12-05 20:53:58.591004] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.852 [2024-12-05 20:53:59.260729] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.852 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:06.111 Malloc0 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:06.111 [2024-12-05 20:53:59.340355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.111 20:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:06.111 test case1: single bdev can't be used in multiple subsystems 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:06.111 [2024-12-05 20:53:59.363871] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:06.111 [2024-12-05 20:53:59.363895] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:06.111 [2024-12-05 20:53:59.363904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.111 request: 00:34:06.111 { 00:34:06.111 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:06.111 "namespace": { 00:34:06.111 "bdev_name": "Malloc0", 00:34:06.111 "no_auto_visible": false, 00:34:06.111 "hide_metadata": false 00:34:06.111 }, 00:34:06.111 "method": "nvmf_subsystem_add_ns", 00:34:06.111 "req_id": 1 00:34:06.111 } 00:34:06.111 Got JSON-RPC error response 00:34:06.111 response: 00:34:06.111 { 00:34:06.111 "code": -32602, 00:34:06.111 "message": "Invalid parameters" 00:34:06.111 } 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:06.111 Adding namespace failed - expected result. 
00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:06.111 test case2: host connect to nvmf target in multiple paths 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:06.111 [2024-12-05 20:53:59.371923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.111 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:06.370 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:06.630 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:06.630 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:06.630 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:06.630 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:34:06.630 20:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:34:08.536 20:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:34:08.536 20:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:34:08.536 20:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:34:08.536 20:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:34:08.536 20:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:34:08.536 20:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:34:08.536 20:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:34:08.536 [global]
00:34:08.536 thread=1
00:34:08.536 invalidate=1
00:34:08.536 rw=write
00:34:08.536 time_based=1
00:34:08.536 runtime=1
00:34:08.536 ioengine=libaio
00:34:08.536 direct=1
00:34:08.536 bs=4096
00:34:08.536 iodepth=1
00:34:08.536 norandommap=0
00:34:08.536 numjobs=1
00:34:08.536
00:34:08.536 verify_dump=1
00:34:08.536 verify_backlog=512
00:34:08.536 verify_state_save=0
00:34:08.536 do_verify=1
00:34:08.536 verify=crc32c-intel
00:34:08.536 [job0]
00:34:08.536 filename=/dev/nvme0n1
00:34:08.536 Could not set queue depth (nvme0n1)
00:34:08.794 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:08.794 fio-3.35
00:34:08.794 Starting 1 thread
00:34:10.171
00:34:10.171 job0: (groupid=0, jobs=1): err= 0: pid=603665: Thu Dec 5 20:54:03 2024
00:34:10.171 read: IOPS=2717, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec)
00:34:10.171 slat (nsec): min=6365, max=29731, avg=7169.47, stdev=1098.53
00:34:10.171 clat (usec): min=172, max=261, avg=198.12, stdev=27.12
00:34:10.171 lat (usec): min=179, max=268, avg=205.29, stdev=27.12
00:34:10.171 clat percentiles (usec):
00:34:10.171 | 1.00th=[ 176], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 182],
00:34:10.171 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188],
00:34:10.171 | 70.00th=[ 190], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 251],
00:34:10.171 | 99.00th=[ 255], 99.50th=[ 258], 99.90th=[ 260], 99.95th=[ 262],
00:34:10.171 | 99.99th=[ 262]
00:34:10.171 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:34:10.171 slat (nsec): min=9073, max=53567, avg=10044.60, stdev=1337.55
00:34:10.171 clat (usec): min=113, max=370, avg=129.93, stdev=15.17
00:34:10.171 lat (usec): min=123, max=424, avg=139.98, stdev=15.43
00:34:10.171 clat percentiles (usec):
00:34:10.171 | 1.00th=[ 119], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 126],
00:34:10.171 | 30.00th=[ 127], 40.00th=[ 128], 50.00th=[ 128], 60.00th=[ 129],
00:34:10.171 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 135], 95.00th=[ 137],
00:34:10.171 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 245], 99.95th=[ 247],
00:34:10.171 | 99.99th=[ 371]
00:34:10.171 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:34:10.171 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:34:10.171 lat (usec) : 250=97.00%, 500=3.00%
00:34:10.171 cpu : usr=2.50%, sys=5.40%, ctx=5792, majf=0, minf=1
00:34:10.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:10.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:10.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:10.171 issued rwts: total=2720,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:10.171 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:10.171
00:34:10.171 Run status group 0 (all jobs):
00:34:10.171 READ: bw=10.6MiB/s (11.1MB/s), 10.6MiB/s-10.6MiB/s (11.1MB/s-11.1MB/s), io=10.6MiB (11.1MB), run=1001-1001msec
00:34:10.171 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec
00:34:10.171
00:34:10.171 Disk stats (read/write):
00:34:10.171 nvme0n1: ios=2607/2560, merge=0/0, ticks=509/320, in_queue=829, util=91.58%
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:34:10.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:10.171 rmmod nvme_tcp
00:34:10.171 rmmod nvme_fabrics
00:34:10.171 rmmod nvme_keyring
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 602843 ']'
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 602843
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 602843 ']'
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 602843
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:10.171 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602843
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602843'
killing process with pid 602843
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 602843
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 602843
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save
00:34:10.428 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore
00:34:10.429 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:10.429 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:10.429 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:10.429 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:10.429 20:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:12.958 20:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:12.958
00:34:12.958 real 0m13.754s
00:34:12.958 user 0m26.900s
00:34:12.958 sys 0m6.291s
00:34:12.958 20:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:12.958 20:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:12.958 ************************************
00:34:12.958 END TEST nvmf_nmic
00:34:12.958 ************************************
00:34:12.958 20:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:34:12.958 20:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:12.958 20:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:12.958 20:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:12.958 ************************************
00:34:12.958 START TEST nvmf_fio_target
00:34:12.958 ************************************
00:34:12.958 20:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:34:12.958 * Looking for test storage...
00:34:12.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-:
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-:
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<'
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:12.958 --rc genhtml_branch_coverage=1
00:34:12.958 --rc genhtml_function_coverage=1
00:34:12.958 --rc genhtml_legend=1
00:34:12.958 --rc geninfo_all_blocks=1
00:34:12.958 --rc geninfo_unexecuted_blocks=1
00:34:12.958
00:34:12.958 '
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:12.958 --rc genhtml_branch_coverage=1
00:34:12.958 --rc genhtml_function_coverage=1
00:34:12.958 --rc genhtml_legend=1
00:34:12.958 --rc geninfo_all_blocks=1
00:34:12.958 --rc geninfo_unexecuted_blocks=1
00:34:12.958
00:34:12.958 '
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:34:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:12.958 --rc genhtml_branch_coverage=1
00:34:12.958 --rc genhtml_function_coverage=1
00:34:12.958 --rc genhtml_legend=1
00:34:12.958 --rc geninfo_all_blocks=1
00:34:12.958 --rc geninfo_unexecuted_blocks=1
00:34:12.958
00:34:12.958 '
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:34:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:12.958 --rc genhtml_branch_coverage=1
00:34:12.958 --rc genhtml_function_coverage=1
00:34:12.958 --rc genhtml_legend=1
00:34:12.958 --rc geninfo_all_blocks=1
00:34:12.958 --rc geninfo_unexecuted_blocks=1
00:34:12.958
00:34:12.958 '
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:34:12.958 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:34:12.958
20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.959 20:54:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.959 
20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable
00:34:12.959 20:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=()
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=()
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=()
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=()
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=()
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:19.530 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
Found net devices under 0000:af:00.0: cvl_0_0
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
Found net devices under 0000:af:00.1: cvl_0_1
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add
cvl_0_0_ns_spdk 00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:19.531 20:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:19.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:19.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:34:19.531 00:34:19.531 --- 10.0.0.2 ping statistics --- 00:34:19.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.531 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:19.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:19.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:34:19.531 00:34:19.531 --- 10.0.0.1 ping statistics --- 00:34:19.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.531 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.531 20:54:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=607970 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 607970 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 607970 ']' 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:19.531 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.531 [2024-12-05 20:54:12.114589] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:19.531 [2024-12-05 20:54:12.115557] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:34:19.531 [2024-12-05 20:54:12.115598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.532 [2024-12-05 20:54:12.193272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:19.532 [2024-12-05 20:54:12.233807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.532 [2024-12-05 20:54:12.233843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.532 [2024-12-05 20:54:12.233850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.532 [2024-12-05 20:54:12.233855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.532 [2024-12-05 20:54:12.233861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.532 [2024-12-05 20:54:12.235278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.532 [2024-12-05 20:54:12.235381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.532 [2024-12-05 20:54:12.235493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.532 [2024-12-05 20:54:12.235495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:19.532 [2024-12-05 20:54:12.303758] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:19.532 [2024-12-05 20:54:12.304637] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:19.532 [2024-12-05 20:54:12.304659] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:19.532 [2024-12-05 20:54:12.304897] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:19.532 [2024-12-05 20:54:12.304946] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:19.532 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:19.532 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:19.532 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:19.532 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:19.532 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.532 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.532 20:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:19.791 [2024-12-05 20:54:13.116161] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.791 20:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:20.050 20:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:20.050 20:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:34:20.309 20:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:20.309 20:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:20.568 20:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:20.568 20:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:20.568 20:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:20.568 20:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:20.827 20:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.086 20:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:21.086 20:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.086 20:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:21.086 20:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.345 20:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:34:21.345 20:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:21.603 20:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:21.879 20:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:21.879 20:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:21.879 20:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:21.879 20:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:22.138 20:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.138 [2024-12-05 20:54:15.560053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.397 20:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:22.397 20:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:22.656 20:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:22.915 20:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:22.915 20:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:22.915 20:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:22.915 20:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:22.915 20:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:22.915 20:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:24.818 20:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:24.818 20:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:24.818 20:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:24.818 20:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:24.818 20:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:24.818 20:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:34:24.818 20:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:24.818 [global] 00:34:24.818 thread=1 00:34:24.818 invalidate=1 00:34:24.818 rw=write 00:34:24.818 time_based=1 00:34:24.818 runtime=1 00:34:24.818 ioengine=libaio 00:34:24.818 direct=1 00:34:24.818 bs=4096 00:34:24.818 iodepth=1 00:34:24.818 norandommap=0 00:34:24.818 numjobs=1 00:34:24.818 00:34:24.818 verify_dump=1 00:34:24.818 verify_backlog=512 00:34:24.818 verify_state_save=0 00:34:24.818 do_verify=1 00:34:24.818 verify=crc32c-intel 00:34:24.818 [job0] 00:34:24.818 filename=/dev/nvme0n1 00:34:24.818 [job1] 00:34:24.818 filename=/dev/nvme0n2 00:34:24.818 [job2] 00:34:24.818 filename=/dev/nvme0n3 00:34:24.818 [job3] 00:34:24.818 filename=/dev/nvme0n4 00:34:25.091 Could not set queue depth (nvme0n1) 00:34:25.091 Could not set queue depth (nvme0n2) 00:34:25.091 Could not set queue depth (nvme0n3) 00:34:25.091 Could not set queue depth (nvme0n4) 00:34:25.348 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.348 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.348 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.348 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.348 fio-3.35 00:34:25.348 Starting 4 threads 00:34:26.716 00:34:26.716 job0: (groupid=0, jobs=1): err= 0: pid=609305: Thu Dec 5 20:54:19 2024 00:34:26.716 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 00:34:26.716 slat (nsec): min=17612, max=27735, avg=22400.26, stdev=1653.39 00:34:26.716 clat (usec): min=40759, max=41861, avg=41020.27, stdev=200.74 00:34:26.716 lat (usec): min=40782, 
max=41889, avg=41042.67, stdev=201.86 00:34:26.716 clat percentiles (usec): 00:34:26.716 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:26.716 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:26.716 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:26.716 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:26.716 | 99.99th=[41681] 00:34:26.716 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:34:26.716 slat (usec): min=10, max=143, avg=13.83, stdev= 7.01 00:34:26.716 clat (usec): min=129, max=282, avg=158.60, stdev=13.25 00:34:26.716 lat (usec): min=148, max=422, avg=172.43, stdev=16.81 00:34:26.716 clat percentiles (usec): 00:34:26.716 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 00:34:26.716 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:34:26.716 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:34:26.716 | 99.00th=[ 192], 99.50th=[ 206], 99.90th=[ 281], 99.95th=[ 281], 00:34:26.716 | 99.99th=[ 281] 00:34:26.716 bw ( KiB/s): min= 4096, max= 4096, per=15.77%, avg=4096.00, stdev= 0.00, samples=1 00:34:26.716 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:26.716 lat (usec) : 250=95.33%, 500=0.37% 00:34:26.716 lat (msec) : 50=4.30% 00:34:26.716 cpu : usr=0.58%, sys=0.68%, ctx=536, majf=0, minf=2 00:34:26.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.716 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.716 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:26.716 job1: (groupid=0, jobs=1): err= 0: pid=609306: Thu Dec 5 20:54:19 2024 00:34:26.716 read: IOPS=2228, BW=8915KiB/s (9129kB/s)(8924KiB/1001msec) 
00:34:26.716 slat (nsec): min=8400, max=47469, avg=9506.96, stdev=1775.99 00:34:26.716 clat (usec): min=167, max=552, avg=241.30, stdev=69.48 00:34:26.716 lat (usec): min=176, max=561, avg=250.81, stdev=69.54 00:34:26.716 clat percentiles (usec): 00:34:26.716 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 206], 00:34:26.716 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:34:26.717 | 70.00th=[ 235], 80.00th=[ 255], 90.00th=[ 293], 95.00th=[ 469], 00:34:26.717 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 523], 99.95th=[ 537], 00:34:26.717 | 99.99th=[ 553] 00:34:26.717 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:34:26.717 slat (nsec): min=12085, max=46366, avg=13426.25, stdev=1793.72 00:34:26.717 clat (usec): min=127, max=288, avg=152.57, stdev=19.94 00:34:26.717 lat (usec): min=141, max=334, avg=166.00, stdev=20.17 00:34:26.717 clat percentiles (usec): 00:34:26.717 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 137], 00:34:26.717 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 153], 00:34:26.717 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 190], 00:34:26.717 | 99.00th=[ 219], 99.50th=[ 233], 99.90th=[ 253], 99.95th=[ 260], 00:34:26.717 | 99.99th=[ 289] 00:34:26.717 bw ( KiB/s): min=12288, max=12288, per=47.30%, avg=12288.00, stdev= 0.00, samples=1 00:34:26.717 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:34:26.717 lat (usec) : 250=89.19%, 500=10.48%, 750=0.33% 00:34:26.717 cpu : usr=4.20%, sys=8.60%, ctx=4793, majf=0, minf=1 00:34:26.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.717 issued rwts: total=2231,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:26.717 job2: 
(groupid=0, jobs=1): err= 0: pid=609307: Thu Dec 5 20:54:19 2024 00:34:26.717 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:34:26.717 slat (nsec): min=7023, max=36888, avg=8414.09, stdev=2630.97 00:34:26.717 clat (usec): min=190, max=41511, avg=760.40, stdev=4569.15 00:34:26.717 lat (usec): min=198, max=41523, avg=768.81, stdev=4570.29 00:34:26.717 clat percentiles (usec): 00:34:26.717 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:34:26.717 | 30.00th=[ 219], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:34:26.717 | 70.00th=[ 251], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:34:26.717 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:34:26.717 | 99.99th=[41681] 00:34:26.717 write: IOPS=1082, BW=4332KiB/s (4436kB/s)(4336KiB/1001msec); 0 zone resets 00:34:26.717 slat (nsec): min=9784, max=48127, avg=13525.56, stdev=4683.52 00:34:26.717 clat (usec): min=120, max=346, avg=177.50, stdev=27.85 00:34:26.717 lat (usec): min=130, max=362, avg=191.03, stdev=29.80 00:34:26.717 clat percentiles (usec): 00:34:26.717 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 151], 00:34:26.717 | 30.00th=[ 157], 40.00th=[ 169], 50.00th=[ 182], 60.00th=[ 188], 00:34:26.717 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 227], 00:34:26.717 | 99.00th=[ 245], 99.50th=[ 260], 99.90th=[ 302], 99.95th=[ 347], 00:34:26.717 | 99.99th=[ 347] 00:34:26.717 bw ( KiB/s): min= 4096, max= 4096, per=15.77%, avg=4096.00, stdev= 0.00, samples=1 00:34:26.717 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:26.717 lat (usec) : 250=84.35%, 500=14.90%, 750=0.14% 00:34:26.717 lat (msec) : 50=0.62% 00:34:26.717 cpu : usr=1.10%, sys=2.80%, ctx=2109, majf=0, minf=1 00:34:26.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:26.717 issued rwts: total=1024,1084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:26.717 job3: (groupid=0, jobs=1): err= 0: pid=609308: Thu Dec 5 20:54:19 2024 00:34:26.717 read: IOPS=2318, BW=9275KiB/s (9497kB/s)(9284KiB/1001msec) 00:34:26.717 slat (nsec): min=7580, max=23425, avg=8733.09, stdev=1126.19 00:34:26.717 clat (usec): min=166, max=505, avg=232.98, stdev=48.58 00:34:26.717 lat (usec): min=175, max=514, avg=241.72, stdev=48.65 00:34:26.717 clat percentiles (usec): 00:34:26.717 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:34:26.717 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:34:26.717 | 70.00th=[ 237], 80.00th=[ 251], 90.00th=[ 273], 95.00th=[ 306], 00:34:26.717 | 99.00th=[ 478], 99.50th=[ 482], 99.90th=[ 502], 99.95th=[ 502], 00:34:26.717 | 99.99th=[ 506] 00:34:26.717 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:34:26.717 slat (nsec): min=4295, max=31179, avg=12077.83, stdev=2204.76 00:34:26.717 clat (usec): min=126, max=261, avg=153.70, stdev=19.21 00:34:26.717 lat (usec): min=138, max=284, avg=165.77, stdev=19.00 00:34:26.717 clat percentiles (usec): 00:34:26.717 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:34:26.717 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 00:34:26.717 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 192], 00:34:26.717 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 239], 99.95th=[ 260], 00:34:26.717 | 99.99th=[ 262] 00:34:26.717 bw ( KiB/s): min=12288, max=12288, per=47.30%, avg=12288.00, stdev= 0.00, samples=1 00:34:26.717 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:34:26.717 lat (usec) : 250=89.90%, 500=10.04%, 750=0.06% 00:34:26.717 cpu : usr=4.50%, sys=7.50%, ctx=4883, majf=0, minf=1 00:34:26.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.717 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.717 issued rwts: total=2321,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:26.717 00:34:26.717 Run status group 0 (all jobs): 00:34:26.717 READ: bw=21.2MiB/s (22.2MB/s), 89.0KiB/s-9275KiB/s (91.1kB/s-9497kB/s), io=21.9MiB (22.9MB), run=1001-1034msec 00:34:26.717 WRITE: bw=25.4MiB/s (26.6MB/s), 1981KiB/s-9.99MiB/s (2028kB/s-10.5MB/s), io=26.2MiB (27.5MB), run=1001-1034msec 00:34:26.717 00:34:26.717 Disk stats (read/write): 00:34:26.717 nvme0n1: ios=68/512, merge=0/0, ticks=757/76, in_queue=833, util=87.17% 00:34:26.717 nvme0n2: ios=2054/2048, merge=0/0, ticks=1356/274, in_queue=1630, util=90.05% 00:34:26.717 nvme0n3: ios=548/948, merge=0/0, ticks=1651/160, in_queue=1811, util=93.54% 00:34:26.717 nvme0n4: ios=2073/2152, merge=0/0, ticks=1346/297, in_queue=1643, util=94.23% 00:34:26.717 20:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:26.717 [global] 00:34:26.717 thread=1 00:34:26.717 invalidate=1 00:34:26.717 rw=randwrite 00:34:26.717 time_based=1 00:34:26.717 runtime=1 00:34:26.717 ioengine=libaio 00:34:26.717 direct=1 00:34:26.717 bs=4096 00:34:26.717 iodepth=1 00:34:26.717 norandommap=0 00:34:26.717 numjobs=1 00:34:26.717 00:34:26.717 verify_dump=1 00:34:26.717 verify_backlog=512 00:34:26.717 verify_state_save=0 00:34:26.717 do_verify=1 00:34:26.717 verify=crc32c-intel 00:34:26.717 [job0] 00:34:26.717 filename=/dev/nvme0n1 00:34:26.717 [job1] 00:34:26.717 filename=/dev/nvme0n2 00:34:26.717 [job2] 00:34:26.717 filename=/dev/nvme0n3 00:34:26.717 [job3] 00:34:26.717 filename=/dev/nvme0n4 00:34:26.717 Could not set queue depth (nvme0n1) 00:34:26.717 Could not set queue depth 
(nvme0n2) 00:34:26.717 Could not set queue depth (nvme0n3) 00:34:26.717 Could not set queue depth (nvme0n4) 00:34:27.027 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:27.027 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:27.027 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:27.027 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:27.027 fio-3.35 00:34:27.027 Starting 4 threads 00:34:28.031 00:34:28.031 job0: (groupid=0, jobs=1): err= 0: pid=609727: Thu Dec 5 20:54:21 2024 00:34:28.031 read: IOPS=886, BW=3546KiB/s (3631kB/s)(3596KiB/1014msec) 00:34:28.031 slat (nsec): min=6714, max=26080, avg=8031.96, stdev=2221.25 00:34:28.031 clat (usec): min=193, max=41977, avg=906.85, stdev=5242.98 00:34:28.031 lat (usec): min=200, max=42000, avg=914.88, stdev=5244.68 00:34:28.031 clat percentiles (usec): 00:34:28.031 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:34:28.031 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:34:28.031 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 251], 00:34:28.031 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:34:28.031 | 99.99th=[42206] 00:34:28.031 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:34:28.031 slat (nsec): min=9624, max=59419, avg=10673.73, stdev=1829.13 00:34:28.031 clat (usec): min=126, max=273, avg=171.34, stdev=21.91 00:34:28.031 lat (usec): min=137, max=321, avg=182.02, stdev=22.26 00:34:28.031 clat percentiles (usec): 00:34:28.031 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 153], 00:34:28.031 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 182], 00:34:28.031 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 
00:34:28.031 | 99.00th=[ 255], 99.50th=[ 255], 99.90th=[ 265], 99.95th=[ 273], 00:34:28.031 | 99.99th=[ 273] 00:34:28.031 bw ( KiB/s): min= 4096, max= 4096, per=22.53%, avg=4096.00, stdev= 0.00, samples=2 00:34:28.031 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:34:28.031 lat (usec) : 250=96.83%, 500=2.39% 00:34:28.031 lat (msec) : 50=0.78% 00:34:28.031 cpu : usr=0.99%, sys=1.78%, ctx=1925, majf=0, minf=1 00:34:28.031 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.031 issued rwts: total=899,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.031 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.031 job1: (groupid=0, jobs=1): err= 0: pid=609728: Thu Dec 5 20:54:21 2024 00:34:28.031 read: IOPS=2300, BW=9203KiB/s (9424kB/s)(9212KiB/1001msec) 00:34:28.031 slat (nsec): min=6560, max=30539, avg=7333.52, stdev=815.93 00:34:28.031 clat (usec): min=179, max=593, avg=228.52, stdev=24.40 00:34:28.031 lat (usec): min=187, max=600, avg=235.85, stdev=24.38 00:34:28.031 clat percentiles (usec): 00:34:28.031 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:34:28.031 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 233], 60.00th=[ 243], 00:34:28.031 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:34:28.031 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 396], 99.95th=[ 416], 00:34:28.031 | 99.99th=[ 594] 00:34:28.031 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:34:28.031 slat (usec): min=4, max=15954, avg=16.30, stdev=315.13 00:34:28.031 clat (usec): min=119, max=343, avg=158.21, stdev=20.59 00:34:28.031 lat (usec): min=129, max=16230, avg=174.51, stdev=318.11 00:34:28.031 clat percentiles (usec): 00:34:28.031 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 
00:34:28.031 | 30.00th=[ 143], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 163], 00:34:28.031 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 192], 00:34:28.031 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 241], 99.95th=[ 277], 00:34:28.031 | 99.99th=[ 343] 00:34:28.031 bw ( KiB/s): min=12288, max=12288, per=67.60%, avg=12288.00, stdev= 0.00, samples=1 00:34:28.031 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:34:28.031 lat (usec) : 250=92.35%, 500=7.63%, 750=0.02% 00:34:28.031 cpu : usr=2.00%, sys=4.70%, ctx=4867, majf=0, minf=1 00:34:28.031 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.032 issued rwts: total=2303,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.032 job2: (groupid=0, jobs=1): err= 0: pid=609731: Thu Dec 5 20:54:21 2024 00:34:28.032 read: IOPS=375, BW=1502KiB/s (1539kB/s)(1516KiB/1009msec) 00:34:28.032 slat (nsec): min=8553, max=30550, avg=11739.13, stdev=3804.91 00:34:28.032 clat (usec): min=202, max=42458, avg=2402.25, stdev=9146.58 00:34:28.032 lat (usec): min=212, max=42480, avg=2413.98, stdev=9149.09 00:34:28.032 clat percentiles (usec): 00:34:28.032 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:34:28.032 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:34:28.032 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 322], 95.00th=[40633], 00:34:28.032 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:28.032 | 99.99th=[42206] 00:34:28.032 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:34:28.032 slat (nsec): min=9406, max=46995, avg=11141.90, stdev=3084.33 00:34:28.032 clat (usec): min=138, max=280, avg=167.12, stdev=14.02 00:34:28.032 lat (usec): min=148, 
max=326, avg=178.26, stdev=15.30 00:34:28.032 clat percentiles (usec): 00:34:28.032 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:34:28.032 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:34:28.032 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:34:28.032 | 99.00th=[ 215], 99.50th=[ 231], 99.90th=[ 281], 99.95th=[ 281], 00:34:28.032 | 99.99th=[ 281] 00:34:28.032 bw ( KiB/s): min= 4096, max= 4096, per=22.53%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.032 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.032 lat (usec) : 250=86.31%, 500=11.45% 00:34:28.032 lat (msec) : 50=2.24% 00:34:28.032 cpu : usr=0.50%, sys=0.99%, ctx=893, majf=0, minf=1 00:34:28.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.032 issued rwts: total=379,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.032 job3: (groupid=0, jobs=1): err= 0: pid=609732: Thu Dec 5 20:54:21 2024 00:34:28.032 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:34:28.032 slat (nsec): min=9282, max=23632, avg=22032.68, stdev=2873.23 00:34:28.032 clat (usec): min=40780, max=42055, avg=41080.57, stdev=319.40 00:34:28.032 lat (usec): min=40803, max=42077, avg=41102.60, stdev=318.73 00:34:28.032 clat percentiles (usec): 00:34:28.032 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:28.032 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:28.032 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:34:28.032 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:28.032 | 99.99th=[42206] 00:34:28.032 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone 
resets 00:34:28.032 slat (nsec): min=9042, max=44178, avg=10015.95, stdev=1795.57 00:34:28.032 clat (usec): min=136, max=312, avg=183.25, stdev=18.30 00:34:28.032 lat (usec): min=145, max=356, avg=193.26, stdev=18.85 00:34:28.032 clat percentiles (usec): 00:34:28.032 | 1.00th=[ 145], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 172], 00:34:28.032 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:34:28.032 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 210], 00:34:28.032 | 99.00th=[ 235], 99.50th=[ 297], 99.90th=[ 314], 99.95th=[ 314], 00:34:28.032 | 99.99th=[ 314] 00:34:28.032 bw ( KiB/s): min= 4096, max= 4096, per=22.53%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.032 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.032 lat (usec) : 250=95.13%, 500=0.75% 00:34:28.032 lat (msec) : 50=4.12% 00:34:28.032 cpu : usr=0.30%, sys=0.50%, ctx=534, majf=0, minf=1 00:34:28.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.032 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.032 00:34:28.032 Run status group 0 (all jobs): 00:34:28.032 READ: bw=13.9MiB/s (14.6MB/s), 87.6KiB/s-9203KiB/s (89.7kB/s-9424kB/s), io=14.1MiB (14.8MB), run=1001-1014msec 00:34:28.032 WRITE: bw=17.8MiB/s (18.6MB/s), 2030KiB/s-9.99MiB/s (2078kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1014msec 00:34:28.032 00:34:28.032 Disk stats (read/write): 00:34:28.032 nvme0n1: ios=540/895, merge=0/0, ticks=1550/152, in_queue=1702, util=86.17% 00:34:28.032 nvme0n2: ios=2066/2048, merge=0/0, ticks=1364/314, in_queue=1678, util=90.15% 00:34:28.032 nvme0n3: ios=398/512, merge=0/0, ticks=1651/81, in_queue=1732, util=93.56% 00:34:28.032 nvme0n4: ios=75/512, merge=0/0, 
ticks=812/91, in_queue=903, util=95.39% 00:34:28.032 20:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:28.032 [global] 00:34:28.032 thread=1 00:34:28.032 invalidate=1 00:34:28.032 rw=write 00:34:28.032 time_based=1 00:34:28.032 runtime=1 00:34:28.032 ioengine=libaio 00:34:28.032 direct=1 00:34:28.032 bs=4096 00:34:28.032 iodepth=128 00:34:28.032 norandommap=0 00:34:28.032 numjobs=1 00:34:28.032 00:34:28.032 verify_dump=1 00:34:28.032 verify_backlog=512 00:34:28.032 verify_state_save=0 00:34:28.032 do_verify=1 00:34:28.032 verify=crc32c-intel 00:34:28.032 [job0] 00:34:28.032 filename=/dev/nvme0n1 00:34:28.032 [job1] 00:34:28.032 filename=/dev/nvme0n2 00:34:28.032 [job2] 00:34:28.032 filename=/dev/nvme0n3 00:34:28.032 [job3] 00:34:28.032 filename=/dev/nvme0n4 00:34:28.298 Could not set queue depth (nvme0n1) 00:34:28.298 Could not set queue depth (nvme0n2) 00:34:28.298 Could not set queue depth (nvme0n3) 00:34:28.298 Could not set queue depth (nvme0n4) 00:34:28.591 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.591 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.591 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.591 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.591 fio-3.35 00:34:28.591 Starting 4 threads 00:34:29.591 00:34:29.591 job0: (groupid=0, jobs=1): err= 0: pid=610150: Thu Dec 5 20:54:22 2024 00:34:29.591 read: IOPS=6077, BW=23.7MiB/s (24.9MB/s)(23.8MiB/1004msec) 00:34:29.591 slat (nsec): min=976, max=13333k, avg=85119.17, stdev=677112.83 00:34:29.591 clat (usec): min=1266, max=26164, avg=11250.55, stdev=3282.10 00:34:29.591 lat (usec): 
min=1282, max=29748, avg=11335.67, stdev=3335.91 00:34:29.591 clat percentiles (usec): 00:34:29.591 | 1.00th=[ 3949], 5.00th=[ 7635], 10.00th=[ 8029], 20.00th=[ 8586], 00:34:29.591 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11994], 00:34:29.591 | 70.00th=[12780], 80.00th=[13960], 90.00th=[15795], 95.00th=[16909], 00:34:29.591 | 99.00th=[20841], 99.50th=[22938], 99.90th=[23987], 99.95th=[25035], 00:34:29.591 | 99.99th=[26084] 00:34:29.591 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:34:29.591 slat (nsec): min=1642, max=10561k, avg=68117.52, stdev=527756.26 00:34:29.591 clat (usec): min=236, max=24979, avg=9569.18, stdev=3265.73 00:34:29.591 lat (usec): min=265, max=24984, avg=9637.30, stdev=3291.99 00:34:29.591 clat percentiles (usec): 00:34:29.591 | 1.00th=[ 1663], 5.00th=[ 4424], 10.00th=[ 5604], 20.00th=[ 7242], 00:34:29.591 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:34:29.591 | 70.00th=[10159], 80.00th=[11731], 90.00th=[13304], 95.00th=[15401], 00:34:29.591 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20579], 99.95th=[22152], 00:34:29.591 | 99.99th=[25035] 00:34:29.591 bw ( KiB/s): min=24576, max=24576, per=33.41%, avg=24576.00, stdev= 0.00, samples=2 00:34:29.591 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:34:29.591 lat (usec) : 250=0.01%, 500=0.01%, 750=0.11%, 1000=0.09% 00:34:29.591 lat (msec) : 2=0.62%, 4=1.89%, 10=54.28%, 20=42.05%, 50=0.94% 00:34:29.591 cpu : usr=3.99%, sys=6.48%, ctx=430, majf=0, minf=2 00:34:29.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:29.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:29.591 issued rwts: total=6102,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:29.591 job1: (groupid=0, jobs=1): err= 0: 
pid=610151: Thu Dec 5 20:54:22 2024 00:34:29.591 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:34:29.591 slat (nsec): min=1123, max=13715k, avg=127881.77, stdev=855679.23 00:34:29.591 clat (usec): min=5869, max=55463, avg=15902.41, stdev=7188.18 00:34:29.591 lat (usec): min=5874, max=55469, avg=16030.29, stdev=7259.67 00:34:29.591 clat percentiles (usec): 00:34:29.591 | 1.00th=[ 6980], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[10552], 00:34:29.591 | 30.00th=[11207], 40.00th=[13304], 50.00th=[14222], 60.00th=[15270], 00:34:29.591 | 70.00th=[17695], 80.00th=[21365], 90.00th=[25560], 95.00th=[29492], 00:34:29.591 | 99.00th=[42730], 99.50th=[47973], 99.90th=[54264], 99.95th=[55313], 00:34:29.591 | 99.99th=[55313] 00:34:29.591 write: IOPS=4137, BW=16.2MiB/s (16.9MB/s)(16.3MiB/1006msec); 0 zone resets 00:34:29.591 slat (nsec): min=1997, max=10341k, avg=108319.69, stdev=714333.92 00:34:29.591 clat (usec): min=1501, max=55469, avg=14995.40, stdev=7445.66 00:34:29.591 lat (usec): min=1515, max=55479, avg=15103.72, stdev=7503.03 00:34:29.591 clat percentiles (usec): 00:34:29.591 | 1.00th=[ 5407], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[ 9765], 00:34:29.591 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[13304], 60.00th=[14877], 00:34:29.591 | 70.00th=[17957], 80.00th=[19006], 90.00th=[23200], 95.00th=[30540], 00:34:29.591 | 99.00th=[45351], 99.50th=[46924], 99.90th=[46924], 99.95th=[55313], 00:34:29.591 | 99.99th=[55313] 00:34:29.591 bw ( KiB/s): min=12288, max=20480, per=22.28%, avg=16384.00, stdev=5792.62, samples=2 00:34:29.591 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:34:29.591 lat (msec) : 2=0.06%, 4=0.01%, 10=25.53%, 20=55.84%, 50=18.38% 00:34:29.591 lat (msec) : 100=0.18% 00:34:29.591 cpu : usr=3.58%, sys=4.18%, ctx=383, majf=0, minf=1 00:34:29.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:29.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.591 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:29.591 issued rwts: total=4096,4162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:29.591 job2: (groupid=0, jobs=1): err= 0: pid=610152: Thu Dec 5 20:54:22 2024 00:34:29.591 read: IOPS=4862, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1003msec) 00:34:29.591 slat (nsec): min=1344, max=10294k, avg=102371.39, stdev=599153.39 00:34:29.591 clat (usec): min=738, max=31266, avg=13140.39, stdev=4289.04 00:34:29.591 lat (usec): min=4230, max=31309, avg=13242.76, stdev=4321.78 00:34:29.591 clat percentiles (usec): 00:34:29.591 | 1.00th=[ 6652], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10159], 00:34:29.591 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[12387], 00:34:29.591 | 70.00th=[13566], 80.00th=[17171], 90.00th=[20317], 95.00th=[21890], 00:34:29.591 | 99.00th=[27132], 99.50th=[27132], 99.90th=[28967], 99.95th=[29754], 00:34:29.591 | 99.99th=[31327] 00:34:29.591 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:34:29.591 slat (nsec): min=1997, max=10968k, avg=92087.51, stdev=533003.23 00:34:29.591 clat (usec): min=5789, max=27283, avg=12225.71, stdev=2707.87 00:34:29.591 lat (usec): min=5792, max=27300, avg=12317.80, stdev=2746.81 00:34:29.591 clat percentiles (usec): 00:34:29.591 | 1.00th=[ 7832], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:34:29.591 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11207], 60.00th=[11338], 00:34:29.591 | 70.00th=[11600], 80.00th=[13304], 90.00th=[16581], 95.00th=[17433], 00:34:29.591 | 99.00th=[22414], 99.50th=[24511], 99.90th=[24773], 99.95th=[25035], 00:34:29.591 | 99.99th=[27395] 00:34:29.591 bw ( KiB/s): min=20439, max=20480, per=27.82%, avg=20459.50, stdev=28.99, samples=2 00:34:29.591 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:34:29.591 lat (usec) : 750=0.01% 00:34:29.591 lat (msec) : 10=11.15%, 20=82.69%, 50=6.14% 00:34:29.591 cpu : 
usr=3.59%, sys=6.39%, ctx=516, majf=0, minf=1 00:34:29.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:29.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:29.591 issued rwts: total=4877,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:29.591 job3: (groupid=0, jobs=1): err= 0: pid=610153: Thu Dec 5 20:54:22 2024 00:34:29.591 read: IOPS=2978, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1005msec) 00:34:29.591 slat (nsec): min=1784, max=20430k, avg=151636.96, stdev=1083374.42 00:34:29.591 clat (usec): min=3260, max=69324, avg=19287.44, stdev=8367.85 00:34:29.591 lat (usec): min=3908, max=69332, avg=19439.08, stdev=8452.58 00:34:29.591 clat percentiles (usec): 00:34:29.591 | 1.00th=[ 6718], 5.00th=[10814], 10.00th=[12518], 20.00th=[13435], 00:34:29.591 | 30.00th=[14091], 40.00th=[15795], 50.00th=[18220], 60.00th=[20055], 00:34:29.591 | 70.00th=[21627], 80.00th=[23462], 90.00th=[28181], 95.00th=[30016], 00:34:29.591 | 99.00th=[57410], 99.50th=[62129], 99.90th=[69731], 99.95th=[69731], 00:34:29.591 | 99.99th=[69731] 00:34:29.591 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:34:29.591 slat (usec): min=2, max=20013, avg=171.82, stdev=1352.88 00:34:29.591 clat (usec): min=1445, max=69302, avg=22595.84, stdev=11122.43 00:34:29.591 lat (usec): min=1473, max=69312, avg=22767.67, stdev=11260.57 00:34:29.591 clat percentiles (usec): 00:34:29.591 | 1.00th=[ 5735], 5.00th=[ 8848], 10.00th=[11469], 20.00th=[15139], 00:34:29.591 | 30.00th=[17171], 40.00th=[18220], 50.00th=[19268], 60.00th=[20317], 00:34:29.591 | 70.00th=[27395], 80.00th=[29754], 90.00th=[38011], 95.00th=[47973], 00:34:29.591 | 99.00th=[52167], 99.50th=[59507], 99.90th=[63701], 99.95th=[69731], 00:34:29.591 | 99.99th=[69731] 00:34:29.591 bw ( KiB/s): min=12288, max=12288, 
per=16.71%, avg=12288.00, stdev= 0.00, samples=2 00:34:29.591 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:34:29.591 lat (msec) : 2=0.03%, 4=0.48%, 10=5.24%, 20=53.50%, 50=37.74% 00:34:29.591 lat (msec) : 100=3.00% 00:34:29.591 cpu : usr=2.19%, sys=3.59%, ctx=151, majf=0, minf=1 00:34:29.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:34:29.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:29.591 issued rwts: total=2993,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:29.591 00:34:29.591 Run status group 0 (all jobs): 00:34:29.591 READ: bw=70.2MiB/s (73.6MB/s), 11.6MiB/s-23.7MiB/s (12.2MB/s-24.9MB/s), io=70.6MiB (74.0MB), run=1003-1006msec 00:34:29.591 WRITE: bw=71.8MiB/s (75.3MB/s), 11.9MiB/s-23.9MiB/s (12.5MB/s-25.1MB/s), io=72.3MiB (75.8MB), run=1003-1006msec 00:34:29.591 00:34:29.591 Disk stats (read/write): 00:34:29.591 nvme0n1: ios=5076/5126, merge=0/0, ticks=45550/40778, in_queue=86328, util=86.77% 00:34:29.591 nvme0n2: ios=3634/3833, merge=0/0, ticks=34107/37424, in_queue=71531, util=90.96% 00:34:29.591 nvme0n3: ios=4117/4222, merge=0/0, ticks=23697/20162, in_queue=43859, util=93.03% 00:34:29.591 nvme0n4: ios=2459/2560, merge=0/0, ticks=22380/26704, in_queue=49084, util=95.39% 00:34:29.591 20:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:29.591 [global] 00:34:29.591 thread=1 00:34:29.591 invalidate=1 00:34:29.591 rw=randwrite 00:34:29.591 time_based=1 00:34:29.591 runtime=1 00:34:29.591 ioengine=libaio 00:34:29.591 direct=1 00:34:29.591 bs=4096 00:34:29.591 iodepth=128 00:34:29.591 norandommap=0 00:34:29.591 numjobs=1 00:34:29.591 00:34:29.591 verify_dump=1 
00:34:29.591 verify_backlog=512 00:34:29.591 verify_state_save=0 00:34:29.591 do_verify=1 00:34:29.591 verify=crc32c-intel 00:34:29.591 [job0] 00:34:29.591 filename=/dev/nvme0n1 00:34:29.591 [job1] 00:34:29.591 filename=/dev/nvme0n2 00:34:29.591 [job2] 00:34:29.591 filename=/dev/nvme0n3 00:34:29.591 [job3] 00:34:29.591 filename=/dev/nvme0n4 00:34:29.895 Could not set queue depth (nvme0n1) 00:34:29.895 Could not set queue depth (nvme0n2) 00:34:29.895 Could not set queue depth (nvme0n3) 00:34:29.895 Could not set queue depth (nvme0n4) 00:34:30.219 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.219 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.219 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.219 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.219 fio-3.35 00:34:30.219 Starting 4 threads 00:34:31.306 00:34:31.306 job0: (groupid=0, jobs=1): err= 0: pid=610578: Thu Dec 5 20:54:24 2024 00:34:31.306 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:34:31.306 slat (nsec): min=1297, max=18374k, avg=79506.25, stdev=600450.34 00:34:31.306 clat (usec): min=2876, max=62781, avg=10170.13, stdev=6587.13 00:34:31.306 lat (usec): min=2890, max=62791, avg=10249.63, stdev=6632.55 00:34:31.306 clat percentiles (usec): 00:34:31.306 | 1.00th=[ 5080], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 6849], 00:34:31.306 | 30.00th=[ 7701], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9896], 00:34:31.306 | 70.00th=[10421], 80.00th=[11207], 90.00th=[11994], 95.00th=[20841], 00:34:31.306 | 99.00th=[53740], 99.50th=[57934], 99.90th=[61604], 99.95th=[62653], 00:34:31.306 | 99.99th=[62653] 00:34:31.306 write: IOPS=6483, BW=25.3MiB/s (26.6MB/s)(25.4MiB/1004msec); 0 zone resets 00:34:31.306 slat (usec): 
min=2, max=26371, avg=66.80, stdev=507.92 00:34:31.306 clat (usec): min=154, max=62748, avg=9946.29, stdev=5709.20 00:34:31.306 lat (usec): min=398, max=62752, avg=10013.09, stdev=5726.45 00:34:31.306 clat percentiles (usec): 00:34:31.306 | 1.00th=[ 2704], 5.00th=[ 4359], 10.00th=[ 5211], 20.00th=[ 6456], 00:34:31.306 | 30.00th=[ 7242], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9503], 00:34:31.306 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[15795], 95.00th=[19268], 00:34:31.306 | 99.00th=[34866], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:34:31.306 | 99.99th=[62653] 00:34:31.306 bw ( KiB/s): min=25016, max=26032, per=35.14%, avg=25524.00, stdev=718.42, samples=2 00:34:31.306 iops : min= 6254, max= 6508, avg=6381.00, stdev=179.61, samples=2 00:34:31.306 lat (usec) : 250=0.01%, 500=0.04%, 750=0.06% 00:34:31.306 lat (msec) : 2=0.17%, 4=0.90%, 10=67.44%, 20=26.40%, 50=4.43% 00:34:31.306 lat (msec) : 100=0.56% 00:34:31.306 cpu : usr=5.08%, sys=6.78%, ctx=575, majf=0, minf=1 00:34:31.306 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:31.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.307 issued rwts: total=6144,6509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.307 job1: (groupid=0, jobs=1): err= 0: pid=610580: Thu Dec 5 20:54:24 2024 00:34:31.307 read: IOPS=3268, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1006msec) 00:34:31.307 slat (nsec): min=915, max=23147k, avg=142408.26, stdev=1159198.25 00:34:31.307 clat (usec): min=896, max=78674, avg=18435.08, stdev=15233.50 00:34:31.307 lat (usec): min=4067, max=78680, avg=18577.49, stdev=15334.56 00:34:31.307 clat percentiles (usec): 00:34:31.307 | 1.00th=[ 5145], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[ 9634], 00:34:31.307 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11600], 60.00th=[14615], 
00:34:31.307 | 70.00th=[19268], 80.00th=[25035], 90.00th=[29230], 95.00th=[58459], 00:34:31.307 | 99.00th=[77071], 99.50th=[78119], 99.90th=[79168], 99.95th=[79168], 00:34:31.307 | 99.99th=[79168] 00:34:31.307 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:34:31.307 slat (nsec): min=1555, max=16911k, avg=139500.65, stdev=907232.37 00:34:31.307 clat (usec): min=834, max=153683, avg=18686.62, stdev=23443.90 00:34:31.307 lat (usec): min=841, max=153691, avg=18826.12, stdev=23569.58 00:34:31.307 clat percentiles (msec): 00:34:31.307 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:34:31.307 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 13], 00:34:31.307 | 70.00th=[ 16], 80.00th=[ 21], 90.00th=[ 32], 95.00th=[ 49], 00:34:31.307 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 155], 00:34:31.307 | 99.99th=[ 155] 00:34:31.307 bw ( KiB/s): min=12263, max=16384, per=19.72%, avg=14323.50, stdev=2913.99, samples=2 00:34:31.307 iops : min= 3065, max= 4096, avg=3580.50, stdev=729.03, samples=2 00:34:31.307 lat (usec) : 1000=0.04% 00:34:31.307 lat (msec) : 4=0.09%, 10=30.65%, 20=44.91%, 50=18.99%, 100=3.71% 00:34:31.307 lat (msec) : 250=1.62% 00:34:31.307 cpu : usr=1.99%, sys=2.79%, ctx=377, majf=0, minf=1 00:34:31.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:34:31.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.307 issued rwts: total=3288,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.307 job2: (groupid=0, jobs=1): err= 0: pid=610581: Thu Dec 5 20:54:24 2024 00:34:31.307 read: IOPS=3996, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1007msec) 00:34:31.307 slat (nsec): min=940, max=20300k, avg=111172.36, stdev=962197.36 00:34:31.307 clat (usec): min=4421, max=40109, avg=16117.39, stdev=5591.53 
00:34:31.307 lat (usec): min=4426, max=55586, avg=16228.56, stdev=5695.84 00:34:31.307 clat percentiles (usec): 00:34:31.307 | 1.00th=[ 7242], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11600], 00:34:31.307 | 30.00th=[12387], 40.00th=[13698], 50.00th=[15008], 60.00th=[15926], 00:34:31.307 | 70.00th=[17957], 80.00th=[20841], 90.00th=[24511], 95.00th=[26870], 00:34:31.307 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[38011], 00:34:31.307 | 99.99th=[40109] 00:34:31.307 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:34:31.307 slat (nsec): min=1809, max=13529k, avg=93668.11, stdev=769147.59 00:34:31.307 clat (usec): min=2032, max=77805, avg=15381.07, stdev=8430.85 00:34:31.307 lat (usec): min=2040, max=77808, avg=15474.74, stdev=8472.84 00:34:31.307 clat percentiles (usec): 00:34:31.307 | 1.00th=[ 2704], 5.00th=[ 7177], 10.00th=[ 7832], 20.00th=[10159], 00:34:31.307 | 30.00th=[11600], 40.00th=[12518], 50.00th=[14484], 60.00th=[15139], 00:34:31.307 | 70.00th=[16450], 80.00th=[17695], 90.00th=[24249], 95.00th=[29754], 00:34:31.307 | 99.00th=[46400], 99.50th=[66847], 99.90th=[78119], 99.95th=[78119], 00:34:31.307 | 99.99th=[78119] 00:34:31.307 bw ( KiB/s): min=14043, max=18696, per=22.54%, avg=16369.50, stdev=3290.17, samples=2 00:34:31.307 iops : min= 3510, max= 4674, avg=4092.00, stdev=823.07, samples=2 00:34:31.307 lat (msec) : 4=1.45%, 10=11.97%, 20=67.64%, 50=18.47%, 100=0.47% 00:34:31.307 cpu : usr=2.49%, sys=4.67%, ctx=274, majf=0, minf=1 00:34:31.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:31.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.307 issued rwts: total=4024,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.307 job3: (groupid=0, jobs=1): err= 0: pid=610582: Thu Dec 5 20:54:24 2024 
00:34:31.307 read: IOPS=3930, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1007msec) 00:34:31.307 slat (nsec): min=995, max=24089k, avg=133248.61, stdev=1038257.40 00:34:31.307 clat (usec): min=1682, max=62111, avg=16393.94, stdev=9302.26 00:34:31.307 lat (usec): min=1684, max=62121, avg=16527.19, stdev=9397.37 00:34:31.307 clat percentiles (usec): 00:34:31.307 | 1.00th=[ 4293], 5.00th=[ 7701], 10.00th=[ 8717], 20.00th=[ 9503], 00:34:31.307 | 30.00th=[10421], 40.00th=[11469], 50.00th=[12387], 60.00th=[16057], 00:34:31.307 | 70.00th=[17695], 80.00th=[23725], 90.00th=[31327], 95.00th=[34866], 00:34:31.307 | 99.00th=[46924], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:34:31.307 | 99.99th=[62129] 00:34:31.307 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:34:31.307 slat (nsec): min=1827, max=24793k, avg=97824.11, stdev=907843.33 00:34:31.307 clat (usec): min=393, max=65256, avg=15311.21, stdev=10332.90 00:34:31.307 lat (usec): min=671, max=65266, avg=15409.03, stdev=10389.50 00:34:31.307 clat percentiles (usec): 00:34:31.307 | 1.00th=[ 1876], 5.00th=[ 4686], 10.00th=[ 5538], 20.00th=[ 8979], 00:34:31.307 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[13304], 00:34:31.307 | 70.00th=[16450], 80.00th=[20579], 90.00th=[28181], 95.00th=[36439], 00:34:31.307 | 99.00th=[56361], 99.50th=[58459], 99.90th=[62129], 99.95th=[65274], 00:34:31.307 | 99.99th=[65274] 00:34:31.307 bw ( KiB/s): min=13144, max=19584, per=22.53%, avg=16364.00, stdev=4553.77, samples=2 00:34:31.307 iops : min= 3286, max= 4896, avg=4091.00, stdev=1138.44, samples=2 00:34:31.307 lat (usec) : 500=0.01%, 750=0.10%, 1000=0.02% 00:34:31.307 lat (msec) : 2=0.63%, 4=1.45%, 10=21.78%, 20=53.00%, 50=21.11% 00:34:31.307 lat (msec) : 100=1.89% 00:34:31.307 cpu : usr=3.38%, sys=5.86%, ctx=248, majf=0, minf=1 00:34:31.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:31.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:31.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.307 issued rwts: total=3958,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.307 00:34:31.307 Run status group 0 (all jobs): 00:34:31.307 READ: bw=67.5MiB/s (70.8MB/s), 12.8MiB/s-23.9MiB/s (13.4MB/s-25.1MB/s), io=68.0MiB (71.3MB), run=1004-1007msec 00:34:31.307 WRITE: bw=70.9MiB/s (74.4MB/s), 13.9MiB/s-25.3MiB/s (14.6MB/s-26.6MB/s), io=71.4MiB (74.9MB), run=1004-1007msec 00:34:31.307 00:34:31.307 Disk stats (read/write): 00:34:31.307 nvme0n1: ios=4952/5120, merge=0/0, ticks=41401/42323, in_queue=83724, util=89.28% 00:34:31.307 nvme0n2: ios=2757/3072, merge=0/0, ticks=29176/44105, in_queue=73281, util=85.70% 00:34:31.307 nvme0n3: ios=3129/3149, merge=0/0, ticks=50569/48670, in_queue=99239, util=89.79% 00:34:31.307 nvme0n4: ios=3116/3112, merge=0/0, ticks=42109/37956, in_queue=80065, util=97.46% 00:34:31.307 20:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:31.307 20:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=610856 00:34:31.307 20:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:31.307 20:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:31.307 [global] 00:34:31.307 thread=1 00:34:31.307 invalidate=1 00:34:31.307 rw=read 00:34:31.307 time_based=1 00:34:31.307 runtime=10 00:34:31.307 ioengine=libaio 00:34:31.307 direct=1 00:34:31.307 bs=4096 00:34:31.307 iodepth=1 00:34:31.307 norandommap=1 00:34:31.307 numjobs=1 00:34:31.307 00:34:31.307 [job0] 00:34:31.307 filename=/dev/nvme0n1 00:34:31.307 [job1] 00:34:31.307 filename=/dev/nvme0n2 00:34:31.307 [job2] 00:34:31.307 filename=/dev/nvme0n3 00:34:31.307 
[job3] 00:34:31.307 filename=/dev/nvme0n4 00:34:31.307 Could not set queue depth (nvme0n1) 00:34:31.307 Could not set queue depth (nvme0n2) 00:34:31.307 Could not set queue depth (nvme0n3) 00:34:31.307 Could not set queue depth (nvme0n4) 00:34:31.591 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:31.591 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:31.591 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:31.591 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:31.591 fio-3.35 00:34:31.591 Starting 4 threads 00:34:34.884 20:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:34.884 20:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:34.884 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=270336, buflen=4096 00:34:34.884 fio: pid=611013, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:34.884 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44797952, buflen=4096 00:34:34.884 fio: pid=611012, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:34.884 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:34.884 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:34.884 fio: io_u error on file /dev/nvme0n1: Operation 
not supported: read offset=55558144, buflen=4096 00:34:34.884 fio: pid=611009, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:34.884 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:34.884 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:35.143 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7708672, buflen=4096 00:34:35.143 fio: pid=611010, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:35.143 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.143 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:35.143 00:34:35.143 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=611009: Thu Dec 5 20:54:28 2024 00:34:35.143 read: IOPS=4431, BW=17.3MiB/s (18.1MB/s)(53.0MiB/3061msec) 00:34:35.143 slat (usec): min=6, max=26643, avg=10.57, stdev=239.19 00:34:35.143 clat (usec): min=162, max=1948, avg=211.69, stdev=26.25 00:34:35.143 lat (usec): min=181, max=27060, avg=222.26, stdev=242.68 00:34:35.143 clat percentiles (usec): 00:34:35.143 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 198], 00:34:35.143 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:34:35.143 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 245], 95.00th=[ 251], 00:34:35.143 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 408], 99.95th=[ 416], 00:34:35.143 | 99.99th=[ 807] 00:34:35.143 bw ( KiB/s): min=16048, max=18928, per=55.55%, 
avg=18027.20, stdev=1154.06, samples=5 00:34:35.143 iops : min= 4012, max= 4732, avg=4506.80, stdev=288.52, samples=5 00:34:35.143 lat (usec) : 250=94.54%, 500=5.43%, 1000=0.01% 00:34:35.143 lat (msec) : 2=0.01% 00:34:35.143 cpu : usr=1.99%, sys=7.45%, ctx=13567, majf=0, minf=2 00:34:35.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.143 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.143 issued rwts: total=13565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:35.143 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=611010: Thu Dec 5 20:54:28 2024 00:34:35.143 read: IOPS=577, BW=2309KiB/s (2365kB/s)(7528KiB/3260msec) 00:34:35.143 slat (usec): min=6, max=3744, avg=10.59, stdev=86.15 00:34:35.143 clat (usec): min=186, max=42307, avg=1708.27, stdev=7566.95 00:34:35.143 lat (usec): min=196, max=46052, avg=1718.85, stdev=7580.68 00:34:35.143 clat percentiles (usec): 00:34:35.143 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:34:35.143 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:34:35.143 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 433], 00:34:35.143 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:34:35.143 | 99.99th=[42206] 00:34:35.143 bw ( KiB/s): min= 93, max=14520, per=7.70%, avg=2500.83, stdev=5888.17, samples=6 00:34:35.143 iops : min= 23, max= 3630, avg=625.17, stdev=1472.06, samples=6 00:34:35.143 lat (usec) : 250=58.20%, 500=37.55%, 750=0.21% 00:34:35.143 lat (msec) : 2=0.42%, 50=3.56% 00:34:35.143 cpu : usr=0.28%, sys=1.01%, ctx=1884, majf=0, minf=2 00:34:35.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:35.143 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.143 issued rwts: total=1883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:35.143 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=611012: Thu Dec 5 20:54:28 2024 00:34:35.143 read: IOPS=3812, BW=14.9MiB/s (15.6MB/s)(42.7MiB/2869msec) 00:34:35.143 slat (usec): min=7, max=7641, avg= 9.67, stdev=94.73 00:34:35.143 clat (usec): min=201, max=1824, avg=248.79, stdev=27.73 00:34:35.143 lat (usec): min=211, max=8069, avg=258.46, stdev=100.56 00:34:35.143 clat percentiles (usec): 00:34:35.143 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 241], 00:34:35.143 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:34:35.143 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 260], 95.00th=[ 265], 00:34:35.143 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 494], 99.95th=[ 515], 00:34:35.143 | 99.99th=[ 1680] 00:34:35.143 bw ( KiB/s): min=15496, max=15616, per=47.84%, avg=15526.40, stdev=50.72, samples=5 00:34:35.143 iops : min= 3874, max= 3904, avg=3881.60, stdev=12.68, samples=5 00:34:35.143 lat (usec) : 250=62.65%, 500=37.26%, 750=0.05% 00:34:35.143 lat (msec) : 2=0.03% 00:34:35.143 cpu : usr=2.16%, sys=6.17%, ctx=10940, majf=0, minf=2 00:34:35.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.143 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.143 issued rwts: total=10938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:35.143 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=611013: Thu Dec 5 20:54:28 2024 00:34:35.143 read: IOPS=24, BW=98.0KiB/s 
(100kB/s)(264KiB/2693msec) 00:34:35.143 slat (nsec): min=12474, max=38608, avg=24413.51, stdev=2368.42 00:34:35.143 clat (usec): min=387, max=42032, avg=40447.12, stdev=5015.62 00:34:35.143 lat (usec): min=426, max=42056, avg=40471.53, stdev=5013.85 00:34:35.143 clat percentiles (usec): 00:34:35.143 | 1.00th=[ 388], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:35.143 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:35.143 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:34:35.143 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:35.143 | 99.99th=[42206] 00:34:35.143 bw ( KiB/s): min= 96, max= 104, per=0.30%, avg=97.60, stdev= 3.58, samples=5 00:34:35.143 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:34:35.143 lat (usec) : 500=1.49% 00:34:35.143 lat (msec) : 50=97.01% 00:34:35.143 cpu : usr=0.11%, sys=0.00%, ctx=67, majf=0, minf=1 00:34:35.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.143 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.143 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:35.143 00:34:35.143 Run status group 0 (all jobs): 00:34:35.143 READ: bw=31.7MiB/s (33.2MB/s), 98.0KiB/s-17.3MiB/s (100kB/s-18.1MB/s), io=103MiB (108MB), run=2693-3260msec 00:34:35.143 00:34:35.143 Disk stats (read/write): 00:34:35.143 nvme0n1: ios=12640/0, merge=0/0, ticks=2542/0, in_queue=2542, util=94.26% 00:34:35.143 nvme0n2: ios=1878/0, merge=0/0, ticks=3040/0, in_queue=3040, util=96.01% 00:34:35.143 nvme0n3: ios=10937/0, merge=0/0, ticks=2596/0, in_queue=2596, util=96.11% 00:34:35.143 nvme0n4: ios=64/0, merge=0/0, ticks=2589/0, in_queue=2589, util=96.48% 00:34:35.143 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.143 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:35.401 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.401 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:35.659 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.659 20:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:35.918 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.918 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:36.176 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:36.176 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 610856 00:34:36.176 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:36.176 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:36.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:36.176 
20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:36.177 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:36.177 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:36.177 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:36.177 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:36.177 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:36.177 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:36.177 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:36.177 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:36.177 nvmf hotplug test: fio failed as expected 00:34:36.177 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:36.437 20:54:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:36.437 rmmod nvme_tcp 00:34:36.437 rmmod nvme_fabrics 00:34:36.437 rmmod nvme_keyring 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 607970 ']' 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 607970 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 607970 ']' 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 607970 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # uname 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 607970 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 607970' 00:34:36.437 killing process with pid 607970 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 607970 00:34:36.437 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 607970 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:36.697 20:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:38.607 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:38.607 00:34:38.607 real 0m26.065s 00:34:38.607 user 1m42.004s 00:34:38.607 sys 0m11.488s 00:34:38.607 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:38.607 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:38.607 ************************************ 00:34:38.607 END TEST nvmf_fio_target 00:34:38.607 ************************************ 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:38.867 ************************************ 00:34:38.867 START TEST nvmf_bdevio 00:34:38.867 ************************************ 00:34:38.867 20:54:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:38.867 * Looking for test storage... 00:34:38.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:38.867 20:54:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:38.867 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > 
ver2[v] )) 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:38.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.868 --rc genhtml_branch_coverage=1 00:34:38.868 --rc genhtml_function_coverage=1 00:34:38.868 --rc genhtml_legend=1 00:34:38.868 --rc geninfo_all_blocks=1 00:34:38.868 --rc geninfo_unexecuted_blocks=1 00:34:38.868 00:34:38.868 ' 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:38.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.868 --rc genhtml_branch_coverage=1 00:34:38.868 --rc genhtml_function_coverage=1 00:34:38.868 --rc genhtml_legend=1 00:34:38.868 --rc geninfo_all_blocks=1 00:34:38.868 --rc geninfo_unexecuted_blocks=1 00:34:38.868 00:34:38.868 ' 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:38.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.868 --rc genhtml_branch_coverage=1 00:34:38.868 --rc genhtml_function_coverage=1 00:34:38.868 --rc genhtml_legend=1 00:34:38.868 --rc geninfo_all_blocks=1 00:34:38.868 --rc geninfo_unexecuted_blocks=1 00:34:38.868 00:34:38.868 ' 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:38.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.868 --rc genhtml_branch_coverage=1 
00:34:38.868 --rc genhtml_function_coverage=1 00:34:38.868 --rc genhtml_legend=1 00:34:38.868 --rc geninfo_all_blocks=1 00:34:38.868 --rc geninfo_unexecuted_blocks=1 00:34:38.868 00:34:38.868 ' 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 
-- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.868 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:39.128 20:54:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:39.128 20:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:45.706 20:54:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:45.706 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:45.706 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.706 20:54:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:45.706 Found net devices under 0000:af:00.0: cvl_0_0 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:45.706 Found net devices under 0000:af:00.1: cvl_0_1 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:45.706 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:45.706 20:54:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:45.707 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:45.707 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:45.707 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:45.707 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:45.707 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:45.707 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:45.707 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:45.707 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:45.707 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:45.707 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:45.707 20:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:45.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:45.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:34:45.707 00:34:45.707 --- 10.0.0.2 ping statistics --- 00:34:45.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.707 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:45.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:45.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:34:45.707 00:34:45.707 --- 10.0.0.1 ping statistics --- 00:34:45.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.707 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=615446 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 615446 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 615446 ']' 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:45.707 20:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:45.707 [2024-12-05 20:54:38.228831] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:45.707 [2024-12-05 20:54:38.229748] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:34:45.707 [2024-12-05 20:54:38.229786] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.707 [2024-12-05 20:54:38.306361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:45.707 [2024-12-05 20:54:38.345442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.707 [2024-12-05 20:54:38.345478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:45.707 [2024-12-05 20:54:38.345485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.707 [2024-12-05 20:54:38.345490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.707 [2024-12-05 20:54:38.345495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:45.707 [2024-12-05 20:54:38.347158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:45.707 [2024-12-05 20:54:38.347276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:45.707 [2024-12-05 20:54:38.347363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:45.707 [2024-12-05 20:54:38.347364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:45.707 [2024-12-05 20:54:38.414256] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:45.707 [2024-12-05 20:54:38.414957] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:45.707 [2024-12-05 20:54:38.415139] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:45.707 [2024-12-05 20:54:38.415372] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:45.707 [2024-12-05 20:54:38.415425] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:45.707 [2024-12-05 20:54:39.084172] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:45.707 Malloc0 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.707 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:45.967 [2024-12-05 20:54:39.160351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:45.967 { 00:34:45.967 "params": { 00:34:45.967 "name": "Nvme$subsystem", 00:34:45.967 "trtype": "$TEST_TRANSPORT", 00:34:45.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:45.967 "adrfam": "ipv4", 00:34:45.967 "trsvcid": "$NVMF_PORT", 00:34:45.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:45.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:45.967 "hdgst": ${hdgst:-false}, 00:34:45.967 "ddgst": ${ddgst:-false} 00:34:45.967 }, 00:34:45.967 "method": "bdev_nvme_attach_controller" 00:34:45.967 } 00:34:45.967 EOF 00:34:45.967 )") 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:45.967 20:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:45.967 "params": { 00:34:45.967 "name": "Nvme1", 00:34:45.967 "trtype": "tcp", 00:34:45.967 "traddr": "10.0.0.2", 00:34:45.967 "adrfam": "ipv4", 00:34:45.967 "trsvcid": "4420", 00:34:45.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:45.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:45.967 "hdgst": false, 00:34:45.967 "ddgst": false 00:34:45.967 }, 00:34:45.967 "method": "bdev_nvme_attach_controller" 00:34:45.967 }' 00:34:45.967 [2024-12-05 20:54:39.210657] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:34:45.967 [2024-12-05 20:54:39.210700] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615585 ] 00:34:45.967 [2024-12-05 20:54:39.281867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:45.967 [2024-12-05 20:54:39.322684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.967 [2024-12-05 20:54:39.322797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.967 [2024-12-05 20:54:39.322798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:46.224 I/O targets: 00:34:46.224 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:46.224 00:34:46.224 00:34:46.224 CUnit - A unit testing framework for C - Version 2.1-3 00:34:46.224 http://cunit.sourceforge.net/ 00:34:46.224 00:34:46.224 00:34:46.224 Suite: bdevio tests on: Nvme1n1 00:34:46.224 Test: blockdev write read block ...passed 00:34:46.224 Test: blockdev write zeroes read block ...passed 00:34:46.224 Test: blockdev write zeroes read no split ...passed 00:34:46.224 Test: blockdev 
write zeroes read split ...passed 00:34:46.485 Test: blockdev write zeroes read split partial ...passed 00:34:46.485 Test: blockdev reset ...[2024-12-05 20:54:39.702458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:46.485 [2024-12-05 20:54:39.702520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2138400 (9): Bad file descriptor 00:34:46.485 [2024-12-05 20:54:39.746800] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:34:46.485 passed 00:34:46.485 Test: blockdev write read 8 blocks ...passed 00:34:46.485 Test: blockdev write read size > 128k ...passed 00:34:46.485 Test: blockdev write read invalid size ...passed 00:34:46.485 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:46.485 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:46.485 Test: blockdev write read max offset ...passed 00:34:46.485 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:46.746 Test: blockdev writev readv 8 blocks ...passed 00:34:46.746 Test: blockdev writev readv 30 x 1block ...passed 00:34:46.746 Test: blockdev writev readv block ...passed 00:34:46.746 Test: blockdev writev readv size > 128k ...passed 00:34:46.746 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:46.746 Test: blockdev comparev and writev ...[2024-12-05 20:54:39.997641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:46.746 [2024-12-05 20:54:39.997668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:46.746 [2024-12-05 20:54:39.997682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:46.746 
[2024-12-05 20:54:39.997693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:46.746 [2024-12-05 20:54:39.997965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:46.746 [2024-12-05 20:54:39.997975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:46.746 [2024-12-05 20:54:39.997986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:46.746 [2024-12-05 20:54:39.997992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:46.746 [2024-12-05 20:54:39.998256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:46.746 [2024-12-05 20:54:39.998267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:46.746 [2024-12-05 20:54:39.998277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:46.746 [2024-12-05 20:54:39.998283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:46.746 [2024-12-05 20:54:39.998544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:46.746 [2024-12-05 20:54:39.998553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:46.746 [2024-12-05 20:54:39.998564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:46.746 [2024-12-05 20:54:39.998570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:46.746 passed 00:34:46.746 Test: blockdev nvme passthru rw ...passed 00:34:46.746 Test: blockdev nvme passthru vendor specific ...[2024-12-05 20:54:40.080442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:46.746 [2024-12-05 20:54:40.080473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:46.746 [2024-12-05 20:54:40.080583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:46.746 [2024-12-05 20:54:40.080593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:46.746 [2024-12-05 20:54:40.080695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:46.746 [2024-12-05 20:54:40.080704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:46.746 [2024-12-05 20:54:40.080804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:46.746 [2024-12-05 20:54:40.080813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:46.746 passed 00:34:46.746 Test: blockdev nvme admin passthru ...passed 00:34:46.746 Test: blockdev copy ...passed 00:34:46.746 00:34:46.746 Run Summary: Type Total Ran Passed Failed Inactive 00:34:46.746 suites 1 1 n/a 0 0 00:34:46.746 tests 23 23 23 0 0 00:34:46.746 asserts 152 152 152 0 n/a 00:34:46.746 00:34:46.746 Elapsed time = 1.251 
seconds 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:47.004 rmmod nvme_tcp 00:34:47.004 rmmod nvme_fabrics 00:34:47.004 rmmod nvme_keyring 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 615446 ']' 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 615446 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 615446 ']' 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 615446 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 615446 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 615446' 00:34:47.004 killing process with pid 615446 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 615446 00:34:47.004 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 615446 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:47.264 20:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.799 20:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:49.799 00:34:49.799 real 0m10.544s 00:34:49.799 user 0m9.194s 00:34:49.799 sys 0m5.140s 00:34:49.799 20:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.799 20:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.799 ************************************ 00:34:49.799 END TEST nvmf_bdevio 00:34:49.799 ************************************ 00:34:49.799 20:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:49.799 00:34:49.799 real 4m35.663s 00:34:49.799 user 9m19.211s 00:34:49.799 sys 1m51.704s 00:34:49.799 20:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:34:49.800 20:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:49.800 ************************************ 00:34:49.800 END TEST nvmf_target_core_interrupt_mode 00:34:49.800 ************************************ 00:34:49.800 20:54:42 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:49.800 20:54:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:49.800 20:54:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.800 20:54:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.800 ************************************ 00:34:49.800 START TEST nvmf_interrupt 00:34:49.800 ************************************ 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:49.800 * Looking for test storage... 
00:34:49.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:49.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.800 --rc genhtml_branch_coverage=1 00:34:49.800 --rc genhtml_function_coverage=1 00:34:49.800 --rc genhtml_legend=1 00:34:49.800 --rc geninfo_all_blocks=1 00:34:49.800 --rc geninfo_unexecuted_blocks=1 00:34:49.800 00:34:49.800 ' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:49.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.800 --rc genhtml_branch_coverage=1 00:34:49.800 --rc 
genhtml_function_coverage=1 00:34:49.800 --rc genhtml_legend=1 00:34:49.800 --rc geninfo_all_blocks=1 00:34:49.800 --rc geninfo_unexecuted_blocks=1 00:34:49.800 00:34:49.800 ' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:49.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.800 --rc genhtml_branch_coverage=1 00:34:49.800 --rc genhtml_function_coverage=1 00:34:49.800 --rc genhtml_legend=1 00:34:49.800 --rc geninfo_all_blocks=1 00:34:49.800 --rc geninfo_unexecuted_blocks=1 00:34:49.800 00:34:49.800 ' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:49.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.800 --rc genhtml_branch_coverage=1 00:34:49.800 --rc genhtml_function_coverage=1 00:34:49.800 --rc genhtml_legend=1 00:34:49.800 --rc geninfo_all_blocks=1 00:34:49.800 --rc geninfo_unexecuted_blocks=1 00:34:49.800 00:34:49.800 ' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.800 
20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.800 
20:54:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:49.800 20:54:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:49.800 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:49.801 
20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:49.801 20:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.370 20:54:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:56.370 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.370 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:56.371 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.371 20:54:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:56.371 Found net devices under 0000:af:00.0: cvl_0_0 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:56.371 Found net devices under 0000:af:00.1: cvl_0_1 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:56.371 20:54:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:56.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:56.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:34:56.371 00:34:56.371 --- 10.0.0.2 ping statistics --- 00:34:56.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.371 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:56.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:56.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:34:56.371 00:34:56.371 --- 10.0.0.1 ping statistics --- 00:34:56.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.371 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:56.371 20:54:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=619448 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 619448 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 619448 ']' 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.371 20:54:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.371 [2024-12-05 20:54:48.965239] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:56.371 [2024-12-05 20:54:48.966051] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:34:56.371 [2024-12-05 20:54:48.966095] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.371 [2024-12-05 20:54:49.027347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:56.371 [2024-12-05 20:54:49.069281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.371 [2024-12-05 20:54:49.069316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.371 [2024-12-05 20:54:49.069323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.371 [2024-12-05 20:54:49.069329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.371 [2024-12-05 20:54:49.069334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.371 [2024-12-05 20:54:49.074076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.371 [2024-12-05 20:54:49.074080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.371 [2024-12-05 20:54:49.141939] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:56.371 [2024-12-05 20:54:49.142417] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:56.371 [2024-12-05 20:54:49.142463] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:56.371 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.371 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:56.371 20:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.371 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.371 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.371 20:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.371 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:56.371 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:56.372 5000+0 records in 00:34:56.372 5000+0 records out 00:34:56.372 10240000 bytes (10 MB, 9.8 MiB) copied, 0.018621 s, 550 MB/s 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.372 AIO0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.372 20:54:49 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.372 [2024-12-05 20:54:49.278757] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.372 [2024-12-05 20:54:49.310999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 619448 0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 619448 0 idle 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619448 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619448 -w 256 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619448 root 20 0 128.2g 44800 33152 S 0.0 0.0 0:00.22 reactor_0' 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619448 root 20 0 128.2g 44800 33152 S 0.0 0.0 0:00.22 reactor_0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 619448 1 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619448 1 idle 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619448 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619448 -w 256 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619486 root 20 0 128.2g 44800 33152 S 0.0 0.0 0:00.00 reactor_1' 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619486 root 20 0 128.2g 44800 33152 S 0.0 0.0 0:00.00 
reactor_1 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=619626 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 619448 0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 619448 0 busy 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619448 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619448 -w 256 00:34:56.372 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:56.631 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619448 root 20 0 128.2g 45696 33152 R 99.9 0.0 0:00.44 reactor_0' 00:34:56.631 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619448 root 20 0 128.2g 45696 33152 R 99.9 0.0 0:00.44 reactor_0 00:34:56.631 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:56.631 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:56.631 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:56.631 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:56.631 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:56.631 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 619448 1 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 619448 1 busy 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619448 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619448 -w 256 00:34:56.632 20:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:56.632 20:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619486 root 20 0 128.2g 45696 33152 R 99.9 0.0 0:00.29 reactor_1' 00:34:56.632 20:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619486 root 20 0 128.2g 45696 33152 R 99.9 0.0 0:00.29 reactor_1 00:34:56.632 20:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:56.632 20:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:56.890 20:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:56.890 20:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:56.890 20:54:50 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:56.890 20:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:56.890 20:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:56.890 20:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:56.890 20:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 619626 00:35:06.864 Initializing NVMe Controllers 00:35:06.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:06.864 Controller IO queue size 256, less than required. 00:35:06.864 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:06.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:06.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:06.864 Initialization complete. Launching workers. 
00:35:06.864 ======================================================== 00:35:06.864 Latency(us) 00:35:06.864 Device Information : IOPS MiB/s Average min max 00:35:06.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18096.60 70.69 14153.08 2724.51 54068.55 00:35:06.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18270.70 71.37 14016.91 4047.11 28190.48 00:35:06.864 ======================================================== 00:35:06.864 Total : 36367.30 142.06 14084.67 2724.51 54068.55 00:35:06.864 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 619448 0 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619448 0 idle 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619448 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:06.864 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:06.865 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619448 -w 256 00:35:06.865 20:54:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619448 root 20 0 128.2g 45696 33152 S 0.0 0.0 0:20.22 reactor_0' 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619448 root 20 0 128.2g 45696 33152 S 0.0 0.0 0:20.22 reactor_0 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 619448 1 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619448 1 idle 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619448 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:06.865 20:55:00 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619448 -w 256 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619486 root 20 0 128.2g 45696 33152 S 0.0 0.0 0:10.00 reactor_1' 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619486 root 20 0 128.2g 45696 33152 S 0.0 0.0 0:10.00 reactor_1 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:06.865 20:55:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:07.435 20:55:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:35:07.435 20:55:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:07.435 20:55:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:07.435 20:55:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:07.435 20:55:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 619448 0 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619448 0 idle 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619448 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619448 -w 256 00:35:09.340 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619448 root 20 0 128.2g 76160 33152 S 6.2 0.1 0:20.54 reactor_0' 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619448 root 20 0 128.2g 76160 33152 S 6.2 0.1 0:20.54 reactor_0 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 619448 1 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619448 1 idle 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619448 00:35:09.600 
20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619448 -w 256 00:35:09.600 20:55:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619486 root 20 0 128.2g 76160 33152 S 0.0 0.1 0:10.12 reactor_1' 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619486 root 20 0 128.2g 76160 33152 S 0.0 0.1 0:10.12 reactor_1 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:09.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:09.860 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:09.860 rmmod nvme_tcp 00:35:09.860 rmmod nvme_fabrics 00:35:10.118 rmmod nvme_keyring 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:10.118 20:55:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 619448 ']' 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 619448 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 619448 ']' 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 619448 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 619448 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 619448' 00:35:10.118 killing process with pid 619448 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 619448 00:35:10.118 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 619448 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:10.377 20:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.276 20:55:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:12.276 00:35:12.276 real 0m22.911s 00:35:12.276 user 0m39.781s 00:35:12.276 sys 0m8.404s 00:35:12.276 20:55:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:12.276 20:55:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:12.276 ************************************ 00:35:12.276 END TEST nvmf_interrupt 00:35:12.276 ************************************ 00:35:12.276 00:35:12.276 real 27m43.197s 00:35:12.276 user 58m5.610s 00:35:12.276 sys 9m17.169s 00:35:12.276 20:55:05 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:12.276 20:55:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:12.276 ************************************ 00:35:12.276 END TEST nvmf_tcp 00:35:12.276 ************************************ 00:35:12.535 20:55:05 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:12.535 20:55:05 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:12.535 20:55:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:12.535 20:55:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:12.535 20:55:05 -- common/autotest_common.sh@10 -- # set +x 00:35:12.535 ************************************ 
00:35:12.535 START TEST spdkcli_nvmf_tcp 00:35:12.535 ************************************ 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:12.535 * Looking for test storage... 00:35:12.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:12.535 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:12.536 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:12.536 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:12.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.839 --rc genhtml_branch_coverage=1 00:35:12.839 --rc genhtml_function_coverage=1 00:35:12.839 --rc genhtml_legend=1 00:35:12.839 --rc geninfo_all_blocks=1 00:35:12.839 --rc geninfo_unexecuted_blocks=1 00:35:12.839 00:35:12.839 ' 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:12.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.839 --rc genhtml_branch_coverage=1 00:35:12.839 --rc genhtml_function_coverage=1 00:35:12.839 --rc genhtml_legend=1 00:35:12.839 --rc geninfo_all_blocks=1 
00:35:12.839 --rc geninfo_unexecuted_blocks=1 00:35:12.839 00:35:12.839 ' 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:12.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.839 --rc genhtml_branch_coverage=1 00:35:12.839 --rc genhtml_function_coverage=1 00:35:12.839 --rc genhtml_legend=1 00:35:12.839 --rc geninfo_all_blocks=1 00:35:12.839 --rc geninfo_unexecuted_blocks=1 00:35:12.839 00:35:12.839 ' 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:12.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.839 --rc genhtml_branch_coverage=1 00:35:12.839 --rc genhtml_function_coverage=1 00:35:12.839 --rc genhtml_legend=1 00:35:12.839 --rc geninfo_all_blocks=1 00:35:12.839 --rc geninfo_unexecuted_blocks=1 00:35:12.839 00:35:12.839 ' 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:12.839 20:55:05 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:12.839 20:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:12.839 20:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:12.839 20:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:12.839 20:55:06 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.839 20:55:06 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.839 20:55:06 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.839 20:55:06 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:12.839 20:55:06 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:12.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=622609 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 622609 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 622609 ']' 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:12.840 20:55:06 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:12.840 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:12.840 [2024-12-05 20:55:06.066832] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:35:12.840 [2024-12-05 20:55:06.066880] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622609 ] 00:35:12.840 [2024-12-05 20:55:06.139811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:12.840 [2024-12-05 20:55:06.180273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.840 [2024-12-05 20:55:06.180276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.773 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:13.773 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:13.773 20:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:13.773 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:13.773 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.773 20:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:13.773 20:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:13.773 20:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:13.773 
20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.773 20:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.773 20:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:13.773 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:13.773 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:13.773 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:13.773 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:13.773 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:13.773 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:13.773 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:13.773 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:13.773 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.773 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:13.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:13.773 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:13.773 ' 00:35:16.301 [2024-12-05 20:55:09.632499] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.678 [2024-12-05 20:55:10.973091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:20.206 [2024-12-05 20:55:13.456630] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:35:22.742 [2024-12-05 20:55:15.623365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:23.854 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:23.854 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:23.854 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:23.854 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:23.854 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:23.854 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:23.854 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:23.854 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:23.854 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:23.854 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:23.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:23.854 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:24.113 20:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:24.113 20:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:24.113 
20:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:24.113 20:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:24.113 20:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:24.113 20:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:24.113 20:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:24.113 20:55:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:24.680 20:55:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:24.680 20:55:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:24.680 20:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:24.680 20:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:24.680 20:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:24.680 20:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:24.680 20:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:24.680 20:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:24.680 20:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:24.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:24.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:24.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:24.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:24.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:24.680 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:24.680 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:24.680 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:24.680 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:24.680 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:24.680 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:24.680 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:24.680 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:24.680 ' 00:35:31.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:31.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:31.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:31.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:31.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:31.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:31.246 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:31.246 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:31.246 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:31.246 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:31.246 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:31.246 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:31.246 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:31.246 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 622609 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 622609 ']' 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 622609 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 622609 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 622609' 00:35:31.246 killing process with pid 622609 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 622609 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 622609 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 622609 ']' 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 622609 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 622609 ']' 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 622609 00:35:31.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (622609) - No such process 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 622609 is not found' 00:35:31.246 Process with pid 622609 is not found 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:31.246 00:35:31.246 real 0m17.962s 00:35:31.246 user 0m39.537s 00:35:31.246 sys 0m0.872s 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.246 20:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.246 ************************************ 00:35:31.246 END TEST spdkcli_nvmf_tcp 00:35:31.246 ************************************ 00:35:31.246 20:55:23 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:31.246 20:55:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:31.246 20:55:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.246 20:55:23 -- common/autotest_common.sh@10 
-- # set +x 00:35:31.246 ************************************ 00:35:31.246 START TEST nvmf_identify_passthru 00:35:31.246 ************************************ 00:35:31.246 20:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:31.246 * Looking for test storage... 00:35:31.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:31.246 20:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:31.246 20:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:35:31.246 20:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:31.246 20:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:31.246 20:55:23 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.246 20:55:23 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:31.246 20:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.246 20:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:31.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.246 --rc genhtml_branch_coverage=1 00:35:31.246 --rc genhtml_function_coverage=1 00:35:31.246 --rc genhtml_legend=1 00:35:31.246 --rc geninfo_all_blocks=1 00:35:31.246 --rc geninfo_unexecuted_blocks=1 00:35:31.246 00:35:31.246 ' 00:35:31.246 
20:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:31.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.246 --rc genhtml_branch_coverage=1 00:35:31.246 --rc genhtml_function_coverage=1 00:35:31.246 --rc genhtml_legend=1 00:35:31.246 --rc geninfo_all_blocks=1 00:35:31.246 --rc geninfo_unexecuted_blocks=1 00:35:31.246 00:35:31.246 ' 00:35:31.247 20:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:31.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.247 --rc genhtml_branch_coverage=1 00:35:31.247 --rc genhtml_function_coverage=1 00:35:31.247 --rc genhtml_legend=1 00:35:31.247 --rc geninfo_all_blocks=1 00:35:31.247 --rc geninfo_unexecuted_blocks=1 00:35:31.247 00:35:31.247 ' 00:35:31.247 20:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:31.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.247 --rc genhtml_branch_coverage=1 00:35:31.247 --rc genhtml_function_coverage=1 00:35:31.247 --rc genhtml_legend=1 00:35:31.247 --rc geninfo_all_blocks=1 00:35:31.247 --rc geninfo_unexecuted_blocks=1 00:35:31.247 00:35:31.247 ' 00:35:31.247 20:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.247 20:55:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:31.247 20:55:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.247 20:55:23 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
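The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace above splits each version string on `.`, `-`, and `:` into arrays and compares the components numerically, padding the shorter version with zeros. A minimal standalone sketch of that comparison (not the SPDK source; `cmp_lt` is a hypothetical name, and numeric components are assumed):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions "<" path traced above: split on .-: and
# compare component-wise, treating missing components as 0.
cmp_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # ${arr[v]:-0} pads the shorter version with zeros; 10# forces base 10
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( 10#$a < 10#$b )); then return 0; fi
        if (( 10#$a > 10#$b )); then return 1; fi
    done
    return 1  # equal versions are not less-than
}

cmp_lt 1.15 2 && echo "1.15 < 2"   # → 1.15 < 2
```

This is why the trace returns 0 for `lt 1.15 2` and the lcov coverage options get enabled: only the first component (1 vs 2) needs to be examined.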
00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.247 20:55:24 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.247 20:55:24 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.247 20:55:24 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.247 20:55:24 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.247 20:55:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.247 20:55:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.247 20:55:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.247 20:55:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:31.247 20:55:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:31.247 20:55:24 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:31.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.247 20:55:24 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.247 20:55:24 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.247 20:55:24 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.247 20:55:24 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.247 20:55:24 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.247 20:55:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.247 20:55:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.247 20:55:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.247 20:55:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:31.247 20:55:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.247 20:55:24 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.247 20:55:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:31.247 20:55:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:31.247 20:55:24 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:31.247 20:55:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:36.522 
20:55:29 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:36.522 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:36.522 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:36.522 Found net devices under 0000:af:00.0: cvl_0_0 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.522 20:55:29 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:36.522 Found net devices under 0000:af:00.1: cvl_0_1 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:36.522 
20:55:29 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:36.522 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:36.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:36.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:35:36.523 00:35:36.523 --- 10.0.0.2 ping statistics --- 00:35:36.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.523 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:36.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:36.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:35:36.523 00:35:36.523 --- 10.0.0.1 ping statistics --- 00:35:36.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.523 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:36.523 20:55:29 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:36.782 20:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:36.782 20:55:29 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:36.782 20:55:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.782 20:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:36.782 
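The `get_first_nvme_bdf` trace above collects NVMe PCI addresses by running `scripts/gen_nvme.sh` and pulling every `.config[].params.traddr` out of its JSON with `jq`, then echoes the first one (`0000:86:00.0` on this node). A hedged sketch of that flow, with a hypothetical `gen_nvme_mock` standing in for the real script's output and a grep/sed stand-in for `jq`:

```shell
#!/usr/bin/env bash
# gen_nvme_mock mimics the shape of gen_nvme.sh's JSON output (hypothetical data).
gen_nvme_mock() {
    cat <<'EOF'
{"config":[{"params":{"traddr":"0000:86:00.0"}},{"params":{"traddr":"0000:87:00.0"}}]}
EOF
}

# Sketch of get_first_nvme_bdf: gather all traddr values, fail if none, print the first.
get_first_nvme_bdf() {
    local -a bdfs
    # The real helper uses: jq -r '.config[].params.traddr'
    bdfs=($(gen_nvme_mock | grep -o '"traddr":"[^"]*"' | sed 's/.*:"\(.*\)"/\1/'))
    (( ${#bdfs[@]} > 0 )) || return 1
    echo "${bdfs[0]}"
}

get_first_nvme_bdf   # → 0000:86:00.0
```

The resulting BDF is then fed to `spdk_nvme_identify -r 'trtype:PCIe traddr:...'` to read the drive's serial and model number, as the next traces show.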
20:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:36.782 20:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:36.782 20:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:36.782 20:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:36.782 20:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:36.782 20:55:30 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:36.782 20:55:30 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:36.782 20:55:30 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:36.782 20:55:30 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:36.782 20:55:30 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:36.782 20:55:30 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:35:36.782 20:55:30 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:86:00.0 00:35:36.782 20:55:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:35:36.782 20:55:30 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:35:36.782 20:55:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:35:36.782 20:55:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:36.782 20:55:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:40.972 20:55:34 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:35:40.972 20:55:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:35:40.972 20:55:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:40.972 20:55:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:45.159 20:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:45.159 20:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:45.159 20:55:38 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.159 20:55:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.159 20:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:45.159 20:55:38 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.159 20:55:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.159 20:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=630400 00:35:45.159 20:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:45.159 20:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:45.159 20:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 630400 00:35:45.159 20:55:38 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 630400 ']' 00:35:45.159 20:55:38 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:35:45.159 20:55:38 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.159 20:55:38 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.159 20:55:38 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.159 20:55:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.420 [2024-12-05 20:55:38.628943] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:35:45.420 [2024-12-05 20:55:38.628987] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.420 [2024-12-05 20:55:38.703943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:45.420 [2024-12-05 20:55:38.742033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.420 [2024-12-05 20:55:38.742073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.420 [2024-12-05 20:55:38.742079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.420 [2024-12-05 20:55:38.742084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.420 [2024-12-05 20:55:38.742089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:45.420 [2024-12-05 20:55:38.743572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.420 [2024-12-05 20:55:38.743684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:45.420 [2024-12-05 20:55:38.743711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.420 [2024-12-05 20:55:38.743712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:46.349 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.349 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:46.349 20:55:39 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:46.349 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.349 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.349 INFO: Log level set to 20 00:35:46.349 INFO: Requests: 00:35:46.349 { 00:35:46.349 "jsonrpc": "2.0", 00:35:46.349 "method": "nvmf_set_config", 00:35:46.349 "id": 1, 00:35:46.349 "params": { 00:35:46.349 "admin_cmd_passthru": { 00:35:46.349 "identify_ctrlr": true 00:35:46.349 } 00:35:46.349 } 00:35:46.349 } 00:35:46.349 00:35:46.349 INFO: response: 00:35:46.349 { 00:35:46.349 "jsonrpc": "2.0", 00:35:46.349 "id": 1, 00:35:46.349 "result": true 00:35:46.349 } 00:35:46.349 00:35:46.349 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.349 20:55:39 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:46.349 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.349 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.349 INFO: Setting log level to 20 00:35:46.349 INFO: Setting log level to 20 00:35:46.349 INFO: Log level set to 20 00:35:46.349 INFO: Log level set to 20 00:35:46.349 
INFO: Requests: 00:35:46.349 { 00:35:46.350 "jsonrpc": "2.0", 00:35:46.350 "method": "framework_start_init", 00:35:46.350 "id": 1 00:35:46.350 } 00:35:46.350 00:35:46.350 INFO: Requests: 00:35:46.350 { 00:35:46.350 "jsonrpc": "2.0", 00:35:46.350 "method": "framework_start_init", 00:35:46.350 "id": 1 00:35:46.350 } 00:35:46.350 00:35:46.350 [2024-12-05 20:55:39.521570] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:46.350 INFO: response: 00:35:46.350 { 00:35:46.350 "jsonrpc": "2.0", 00:35:46.350 "id": 1, 00:35:46.350 "result": true 00:35:46.350 } 00:35:46.350 00:35:46.350 INFO: response: 00:35:46.350 { 00:35:46.350 "jsonrpc": "2.0", 00:35:46.350 "id": 1, 00:35:46.350 "result": true 00:35:46.350 } 00:35:46.350 00:35:46.350 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.350 20:55:39 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:46.350 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.350 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.350 INFO: Setting log level to 40 00:35:46.350 INFO: Setting log level to 40 00:35:46.350 INFO: Setting log level to 40 00:35:46.350 [2024-12-05 20:55:39.534825] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.350 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.350 20:55:39 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:46.350 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:46.350 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.350 20:55:39 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:35:46.350 20:55:39 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.350 20:55:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:49.624 Nvme0n1 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:49.624 [2024-12-05 20:55:42.451661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.624 20:55:42 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:49.624 [ 00:35:49.624 { 00:35:49.624 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:49.624 "subtype": "Discovery", 00:35:49.624 "listen_addresses": [], 00:35:49.624 "allow_any_host": true, 00:35:49.624 "hosts": [] 00:35:49.624 }, 00:35:49.624 { 00:35:49.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:49.624 "subtype": "NVMe", 00:35:49.624 "listen_addresses": [ 00:35:49.624 { 00:35:49.624 "trtype": "TCP", 00:35:49.624 "adrfam": "IPv4", 00:35:49.624 "traddr": "10.0.0.2", 00:35:49.624 "trsvcid": "4420" 00:35:49.624 } 00:35:49.624 ], 00:35:49.624 "allow_any_host": true, 00:35:49.624 "hosts": [], 00:35:49.624 "serial_number": "SPDK00000000000001", 00:35:49.624 "model_number": "SPDK bdev Controller", 00:35:49.624 "max_namespaces": 1, 00:35:49.624 "min_cntlid": 1, 00:35:49.624 "max_cntlid": 65519, 00:35:49.624 "namespaces": [ 00:35:49.624 { 00:35:49.624 "nsid": 1, 00:35:49.624 "bdev_name": "Nvme0n1", 00:35:49.624 "name": "Nvme0n1", 00:35:49.624 "nguid": "8BF764302F104C49A3D6B46C4FBD55D2", 00:35:49.624 "uuid": "8bf76430-2f10-4c49-a3d6-b46c4fbd55d2" 00:35:49.624 } 00:35:49.624 ] 00:35:49.624 } 00:35:49.624 ] 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:49.624 20:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:49.624 20:55:42 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:49.624 20:55:42 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:49.624 20:55:42 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:49.624 20:55:42 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:49.624 20:55:42 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:49.624 20:55:42 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:49.624 rmmod nvme_tcp 00:35:49.624 rmmod nvme_fabrics 00:35:49.624 rmmod nvme_keyring 00:35:49.624 20:55:42 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:49.624 20:55:42 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:49.624 20:55:42 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:49.624 20:55:42 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 630400 ']' 00:35:49.624 20:55:42 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 630400 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 630400 ']' 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 630400 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 630400 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 630400' 00:35:49.624 killing process with pid 630400 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 630400 00:35:49.624 20:55:42 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 630400 00:35:50.999 20:55:44 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:50.999 20:55:44 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:50.999 20:55:44 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:50.999 20:55:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:50.999 20:55:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:50.999 20:55:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:35:50.999 20:55:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:50.999 20:55:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:50.999 20:55:44 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:50.999 20:55:44 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.999 20:55:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:50.999 20:55:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.533 20:55:46 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:53.533 00:35:53.533 real 0m22.659s 00:35:53.533 user 0m29.582s 00:35:53.533 sys 0m6.242s 00:35:53.533 20:55:46 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:53.533 20:55:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:53.533 ************************************ 00:35:53.533 END TEST nvmf_identify_passthru 00:35:53.533 ************************************ 00:35:53.533 20:55:46 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:53.533 20:55:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:53.533 20:55:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:53.533 20:55:46 -- common/autotest_common.sh@10 -- # set +x 00:35:53.533 ************************************ 00:35:53.533 START TEST nvmf_dif 00:35:53.533 ************************************ 00:35:53.533 20:55:46 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:53.533 * Looking for test storage... 
00:35:53.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:53.533 20:55:46 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:53.533 20:55:46 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:35:53.533 20:55:46 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:53.533 20:55:46 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:53.533 20:55:46 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.533 20:55:46 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:53.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.533 --rc genhtml_branch_coverage=1 00:35:53.533 --rc genhtml_function_coverage=1 00:35:53.533 --rc genhtml_legend=1 00:35:53.533 --rc geninfo_all_blocks=1 00:35:53.533 --rc geninfo_unexecuted_blocks=1 00:35:53.533 00:35:53.533 ' 00:35:53.533 20:55:46 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:53.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.533 --rc genhtml_branch_coverage=1 00:35:53.533 --rc genhtml_function_coverage=1 00:35:53.533 --rc genhtml_legend=1 00:35:53.533 --rc geninfo_all_blocks=1 00:35:53.533 --rc geninfo_unexecuted_blocks=1 00:35:53.533 00:35:53.533 ' 00:35:53.533 20:55:46 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:35:53.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.533 --rc genhtml_branch_coverage=1 00:35:53.533 --rc genhtml_function_coverage=1 00:35:53.533 --rc genhtml_legend=1 00:35:53.533 --rc geninfo_all_blocks=1 00:35:53.533 --rc geninfo_unexecuted_blocks=1 00:35:53.533 00:35:53.533 ' 00:35:53.533 20:55:46 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:53.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.533 --rc genhtml_branch_coverage=1 00:35:53.533 --rc genhtml_function_coverage=1 00:35:53.533 --rc genhtml_legend=1 00:35:53.533 --rc geninfo_all_blocks=1 00:35:53.533 --rc geninfo_unexecuted_blocks=1 00:35:53.533 00:35:53.533 ' 00:35:53.533 20:55:46 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:35:53.533 20:55:46 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:53.533 20:55:46 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.533 20:55:46 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.534 20:55:46 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.534 20:55:46 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.534 20:55:46 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.534 20:55:46 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:53.534 20:55:46 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:53.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:53.534 20:55:46 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:53.534 20:55:46 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:35:53.534 20:55:46 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:53.534 20:55:46 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:53.534 20:55:46 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.534 20:55:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:53.534 20:55:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:53.534 20:55:46 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:35:53.534 20:55:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:00.101 20:55:52 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:00.101 20:55:52 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:00.102 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:00.102 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:00.102 20:55:52 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:00.102 Found net devices under 0000:af:00.0: cvl_0_0 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:00.102 Found net devices under 0000:af:00.1: cvl_0_1 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:00.102 
20:55:52 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:00.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:00.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:36:00.102 00:36:00.102 --- 10.0.0.2 ping statistics --- 00:36:00.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.102 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:36:00.102 20:55:52 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:00.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:00.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:36:00.103 00:36:00.103 --- 10.0.0.1 ping statistics --- 00:36:00.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.103 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:36:00.103 20:55:52 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:00.103 20:55:52 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:00.103 20:55:52 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:00.103 20:55:52 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:02.007 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:02.007 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:36:02.007 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:02.007 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:02.266 20:55:55 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:02.266 20:55:55 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:02.266 20:55:55 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:02.266 20:55:55 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:02.266 20:55:55 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:02.266 20:55:55 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:02.266 20:55:55 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:02.266 20:55:55 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:02.266 20:55:55 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:02.266 20:55:55 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:02.266 20:55:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:02.266 20:55:55 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=636241 00:36:02.266 20:55:55 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 636241 00:36:02.266 20:55:55 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:02.266 20:55:55 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 636241 ']' 00:36:02.266 20:55:55 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.266 20:55:55 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.266 20:55:55 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:02.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.266 20:55:55 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.266 20:55:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:02.266 [2024-12-05 20:55:55.614780] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:36:02.266 [2024-12-05 20:55:55.614815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:02.266 [2024-12-05 20:55:55.694046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.524 [2024-12-05 20:55:55.733081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:02.524 [2024-12-05 20:55:55.733114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:02.524 [2024-12-05 20:55:55.733121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:02.524 [2024-12-05 20:55:55.733127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:02.524 [2024-12-05 20:55:55.733131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:02.524 [2024-12-05 20:55:55.733686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.091 20:55:56 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:03.091 20:55:56 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:03.091 20:55:56 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:03.091 20:55:56 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:03.091 20:55:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.091 20:55:56 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:03.091 20:55:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:03.091 20:55:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:03.091 20:55:56 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.091 20:55:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.091 [2024-12-05 20:55:56.482452] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:03.091 20:55:56 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.091 20:55:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:03.091 20:55:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:03.091 20:55:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:03.091 20:55:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.091 ************************************ 00:36:03.091 START TEST fio_dif_1_default 00:36:03.091 ************************************ 00:36:03.091 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:03.091 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:03.091 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:03.091 20:55:56 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:03.091 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:03.091 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:03.091 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:03.091 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.091 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.351 bdev_null0 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.351 [2024-12-05 20:55:56.562792] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:03.351 { 00:36:03.351 "params": { 00:36:03.351 "name": "Nvme$subsystem", 00:36:03.351 "trtype": "$TEST_TRANSPORT", 00:36:03.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.351 "adrfam": "ipv4", 00:36:03.351 "trsvcid": "$NVMF_PORT", 00:36:03.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.351 "hdgst": ${hdgst:-false}, 00:36:03.351 "ddgst": ${ddgst:-false} 00:36:03.351 }, 00:36:03.351 "method": "bdev_nvme_attach_controller" 00:36:03.351 } 00:36:03.351 EOF 00:36:03.351 )") 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:03.351 "params": { 00:36:03.351 "name": "Nvme0", 00:36:03.351 "trtype": "tcp", 00:36:03.351 "traddr": "10.0.0.2", 00:36:03.351 "adrfam": "ipv4", 00:36:03.351 "trsvcid": "4420", 00:36:03.351 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:03.351 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:03.351 "hdgst": false, 00:36:03.351 "ddgst": false 00:36:03.351 }, 00:36:03.351 "method": "bdev_nvme_attach_controller" 00:36:03.351 }' 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:03.351 20:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.610 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:03.610 fio-3.35 
00:36:03.610 Starting 1 thread 00:36:15.815 00:36:15.815 filename0: (groupid=0, jobs=1): err= 0: pid=636666: Thu Dec 5 20:56:07 2024 00:36:15.815 read: IOPS=192, BW=769KiB/s (788kB/s)(7712KiB/10026msec) 00:36:15.815 slat (nsec): min=5489, max=31869, avg=5721.40, stdev=823.64 00:36:15.815 clat (usec): min=340, max=43794, avg=20784.73, stdev=20329.37 00:36:15.815 lat (usec): min=346, max=43826, avg=20790.45, stdev=20329.33 00:36:15.815 clat percentiles (usec): 00:36:15.815 | 1.00th=[ 359], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 375], 00:36:15.815 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[40633], 60.00th=[40633], 00:36:15.815 | 70.00th=[40633], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:36:15.815 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:36:15.815 | 99.99th=[43779] 00:36:15.815 bw ( KiB/s): min= 736, max= 832, per=99.97%, avg=769.60, stdev=16.33, samples=20 00:36:15.815 iops : min= 184, max= 208, avg=192.40, stdev= 4.08, samples=20 00:36:15.815 lat (usec) : 500=49.79% 00:36:15.815 lat (msec) : 50=50.21% 00:36:15.815 cpu : usr=92.68%, sys=7.06%, ctx=8, majf=0, minf=0 00:36:15.815 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:15.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.815 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.815 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:15.815 00:36:15.815 Run status group 0 (all jobs): 00:36:15.815 READ: bw=769KiB/s (788kB/s), 769KiB/s-769KiB/s (788kB/s-788kB/s), io=7712KiB (7897kB), run=10026-10026msec 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:15.815 20:56:07 
nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.815 00:36:15.815 real 0m11.289s 00:36:15.815 user 0m18.354s 00:36:15.815 sys 0m1.056s 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:15.815 ************************************ 00:36:15.815 END TEST fio_dif_1_default 00:36:15.815 ************************************ 00:36:15.815 20:56:07 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:15.815 20:56:07 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:15.815 20:56:07 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:15.815 ************************************ 00:36:15.815 START TEST fio_dif_1_multi_subsystems 00:36:15.815 ************************************ 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.815 bdev_null0 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.815 [2024-12-05 20:56:07.909455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.815 bdev_null1 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.815 20:56:07 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:15.815 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.816 { 00:36:15.816 "params": { 00:36:15.816 "name": "Nvme$subsystem", 00:36:15.816 "trtype": "$TEST_TRANSPORT", 00:36:15.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.816 "adrfam": "ipv4", 00:36:15.816 "trsvcid": "$NVMF_PORT", 00:36:15.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.816 "hdgst": ${hdgst:-false}, 00:36:15.816 "ddgst": ${ddgst:-false} 00:36:15.816 }, 00:36:15.816 "method": "bdev_nvme_attach_controller" 00:36:15.816 } 00:36:15.816 EOF 00:36:15.816 )") 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.816 20:56:07 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.816 { 00:36:15.816 "params": { 00:36:15.816 "name": "Nvme$subsystem", 00:36:15.816 "trtype": "$TEST_TRANSPORT", 00:36:15.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.816 "adrfam": "ipv4", 00:36:15.816 "trsvcid": "$NVMF_PORT", 00:36:15.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.816 "hdgst": ${hdgst:-false}, 00:36:15.816 "ddgst": ${ddgst:-false} 00:36:15.816 }, 00:36:15.816 "method": "bdev_nvme_attach_controller" 00:36:15.816 } 00:36:15.816 EOF 00:36:15.816 )") 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:15.816 "params": { 00:36:15.816 "name": "Nvme0", 00:36:15.816 "trtype": "tcp", 00:36:15.816 "traddr": "10.0.0.2", 00:36:15.816 "adrfam": "ipv4", 00:36:15.816 "trsvcid": "4420", 00:36:15.816 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.816 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:15.816 "hdgst": false, 00:36:15.816 "ddgst": false 00:36:15.816 }, 00:36:15.816 "method": "bdev_nvme_attach_controller" 00:36:15.816 },{ 00:36:15.816 "params": { 00:36:15.816 "name": "Nvme1", 00:36:15.816 "trtype": "tcp", 00:36:15.816 "traddr": "10.0.0.2", 00:36:15.816 "adrfam": "ipv4", 00:36:15.816 "trsvcid": "4420", 00:36:15.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:15.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:15.816 "hdgst": false, 00:36:15.816 "ddgst": false 00:36:15.816 }, 00:36:15.816 "method": "bdev_nvme_attach_controller" 00:36:15.816 }' 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:15.816 20:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:15.816 20:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:15.816 20:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:15.816 20:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:15.816 20:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.816 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:15.816 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:15.816 fio-3.35 00:36:15.816 Starting 2 threads 00:36:25.788 00:36:25.788 filename0: (groupid=0, jobs=1): err= 0: pid=638910: Thu Dec 5 20:56:18 2024 00:36:25.788 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:36:25.788 slat (nsec): min=5421, max=26471, avg=7162.64, stdev=2631.37 00:36:25.788 clat (usec): min=40806, max=41989, avg=40987.76, stdev=105.26 00:36:25.788 lat (usec): min=40812, max=42000, avg=40994.93, stdev=105.47 00:36:25.788 clat percentiles (usec): 00:36:25.788 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:25.788 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:25.788 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:25.788 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:25.788 | 99.99th=[42206] 00:36:25.788 bw ( KiB/s): min= 384, max= 416, per=40.03%, avg=389.05, stdev=11.99, samples=19 00:36:25.788 iops : min= 96, max= 104, avg=97.26, stdev= 3.00, samples=19 00:36:25.788 lat (msec) : 50=100.00% 00:36:25.788 cpu : usr=96.89%, sys=2.86%, ctx=9, majf=0, minf=109 00:36:25.788 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.788 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.788 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:25.788 filename1: (groupid=0, jobs=1): err= 0: pid=638911: Thu Dec 5 20:56:18 2024 00:36:25.788 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10010msec) 00:36:25.788 slat (nsec): min=5451, max=26172, avg=6704.83, stdev=2127.89 00:36:25.788 clat (usec): min=356, max=42546, avg=27478.36, stdev=19200.01 00:36:25.788 lat (usec): min=362, max=42552, avg=27485.06, stdev=19199.74 00:36:25.788 clat percentiles (usec): 00:36:25.788 | 1.00th=[ 379], 5.00th=[ 396], 10.00th=[ 412], 20.00th=[ 437], 00:36:25.788 | 30.00th=[ 562], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:36:25.788 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:36:25.788 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:36:25.788 | 99.99th=[42730] 00:36:25.788 bw ( KiB/s): min= 384, max= 864, per=59.68%, avg=580.80, stdev=192.92, samples=20 00:36:25.788 iops : min= 96, max= 216, avg=145.20, stdev=48.23, samples=20 00:36:25.788 lat (usec) : 500=26.92%, 750=6.59% 00:36:25.788 lat (msec) : 50=66.48% 00:36:25.788 cpu : usr=96.62%, sys=3.13%, ctx=13, majf=0, minf=30 00:36:25.788 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.788 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.788 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:25.788 00:36:25.788 Run status group 0 (all jobs): 00:36:25.788 READ: bw=972KiB/s (995kB/s), 390KiB/s-582KiB/s (399kB/s-596kB/s), io=9728KiB (9961kB), run=10007-10010msec 00:36:25.788 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:25.788 20:56:19 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:25.788 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.788 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:25.788 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:25.788 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:25.788 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.789 00:36:25.789 real 0m11.325s 00:36:25.789 user 0m28.689s 00:36:25.789 sys 0m0.980s 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:25.789 20:56:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.789 ************************************ 00:36:25.789 END TEST fio_dif_1_multi_subsystems 00:36:25.789 ************************************ 00:36:26.046 20:56:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:26.046 20:56:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:26.046 20:56:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:26.046 20:56:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:26.046 ************************************ 00:36:26.046 START TEST fio_dif_rand_params 00:36:26.046 ************************************ 00:36:26.046 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:26.046 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:26.046 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:26.046 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:26.046 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:26.046 20:56:19 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:36:26.046 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:26.046 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:26.046 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:26.046 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:26.047 bdev_null0 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:26.047 20:56:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:26.047 [2024-12-05 20:56:19.309262] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:26.047 { 00:36:26.047 "params": { 00:36:26.047 "name": "Nvme$subsystem", 00:36:26.047 "trtype": "$TEST_TRANSPORT", 00:36:26.047 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:36:26.047 "adrfam": "ipv4", 00:36:26.047 "trsvcid": "$NVMF_PORT", 00:36:26.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:26.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:26.047 "hdgst": ${hdgst:-false}, 00:36:26.047 "ddgst": ${ddgst:-false} 00:36:26.047 }, 00:36:26.047 "method": "bdev_nvme_attach_controller" 00:36:26.047 } 00:36:26.047 EOF 00:36:26.047 )") 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:26.047 20:56:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:26.047 "params": { 00:36:26.047 "name": "Nvme0", 00:36:26.047 "trtype": "tcp", 00:36:26.047 "traddr": "10.0.0.2", 00:36:26.047 "adrfam": "ipv4", 00:36:26.047 "trsvcid": "4420", 00:36:26.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:26.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:26.047 "hdgst": false, 00:36:26.047 "ddgst": false 00:36:26.047 }, 00:36:26.047 "method": "bdev_nvme_attach_controller" 00:36:26.047 }' 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:26.047 20:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:26.303 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:26.303 ... 00:36:26.303 fio-3.35 00:36:26.303 Starting 3 threads 00:36:32.855 00:36:32.855 filename0: (groupid=0, jobs=1): err= 0: pid=640899: Thu Dec 5 20:56:25 2024 00:36:32.855 read: IOPS=335, BW=42.0MiB/s (44.0MB/s)(210MiB/5008msec) 00:36:32.855 slat (nsec): min=5594, max=34569, avg=9978.54, stdev=2138.61 00:36:32.855 clat (usec): min=3473, max=50166, avg=8915.51, stdev=6084.47 00:36:32.855 lat (usec): min=3478, max=50201, avg=8925.49, stdev=6084.49 00:36:32.855 clat percentiles (usec): 00:36:32.855 | 1.00th=[ 5145], 5.00th=[ 5800], 10.00th=[ 6259], 20.00th=[ 7046], 00:36:32.855 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8356], 00:36:32.855 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[10159], 00:36:32.855 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:36:32.855 | 99.99th=[50070] 00:36:32.855 bw ( KiB/s): min=19968, max=51968, per=34.98%, avg=43008.00, stdev=8927.84, samples=10 00:36:32.855 iops : min= 156, max= 406, avg=336.00, stdev=69.75, samples=10 00:36:32.855 lat (msec) : 4=0.54%, 10=92.93%, 20=4.22%, 50=2.14%, 100=0.18% 00:36:32.855 cpu : usr=94.31%, sys=5.39%, ctx=9, majf=0, minf=79 00:36:32.855 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.855 issued rwts: total=1682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.855 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.855 filename0: (groupid=0, jobs=1): err= 0: pid=640900: Thu Dec 5 20:56:25 2024 00:36:32.855 read: IOPS=323, BW=40.4MiB/s (42.3MB/s)(202MiB/5005msec) 00:36:32.855 slat (nsec): min=5598, max=42053, avg=10104.07, stdev=2063.63 
00:36:32.855 clat (usec): min=2896, max=50686, avg=9271.33, stdev=5502.16 00:36:32.855 lat (usec): min=2902, max=50693, avg=9281.43, stdev=5502.43 00:36:32.855 clat percentiles (usec): 00:36:32.855 | 1.00th=[ 3392], 5.00th=[ 5276], 10.00th=[ 5997], 20.00th=[ 7177], 00:36:32.855 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:36:32.855 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10683], 95.00th=[11207], 00:36:32.855 | 99.00th=[46400], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:36:32.855 | 99.99th=[50594] 00:36:32.855 bw ( KiB/s): min=36608, max=46592, per=33.61%, avg=41329.78, stdev=2933.16, samples=9 00:36:32.855 iops : min= 286, max= 364, avg=322.89, stdev=22.92, samples=9 00:36:32.855 lat (msec) : 4=3.40%, 10=73.84%, 20=20.90%, 50=1.42%, 100=0.43% 00:36:32.855 cpu : usr=94.18%, sys=5.52%, ctx=10, majf=0, minf=52 00:36:32.855 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.855 issued rwts: total=1617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.855 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.855 filename0: (groupid=0, jobs=1): err= 0: pid=640901: Thu Dec 5 20:56:25 2024 00:36:32.855 read: IOPS=306, BW=38.3MiB/s (40.2MB/s)(193MiB/5044msec) 00:36:32.855 slat (nsec): min=5591, max=29301, avg=9873.45, stdev=2124.73 00:36:32.855 clat (usec): min=3419, max=90806, avg=9749.40, stdev=7221.72 00:36:32.855 lat (usec): min=3425, max=90816, avg=9759.28, stdev=7221.64 00:36:32.855 clat percentiles (usec): 00:36:32.855 | 1.00th=[ 4015], 5.00th=[ 5538], 10.00th=[ 6325], 20.00th=[ 7570], 00:36:32.855 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:36:32.855 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11076], 00:36:32.855 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51643], 
99.95th=[90702], 00:36:32.855 | 99.99th=[90702] 00:36:32.855 bw ( KiB/s): min=20224, max=48896, per=32.15%, avg=39526.40, stdev=8874.83, samples=10 00:36:32.855 iops : min= 158, max= 382, avg=308.80, stdev=69.33, samples=10 00:36:32.855 lat (msec) : 4=0.97%, 10=81.50%, 20=14.36%, 50=2.65%, 100=0.52% 00:36:32.855 cpu : usr=94.73%, sys=5.00%, ctx=10, majf=0, minf=20 00:36:32.855 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.855 issued rwts: total=1546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.855 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.855 00:36:32.855 Run status group 0 (all jobs): 00:36:32.855 READ: bw=120MiB/s (126MB/s), 38.3MiB/s-42.0MiB/s (40.2MB/s-44.0MB/s), io=606MiB (635MB), run=5005-5044msec 00:36:32.855 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:32.855 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:32.856 20:56:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 bdev_null0 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 [2024-12-05 20:56:25.535457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 bdev_null1 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:32.856 bdev_null2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.856 { 00:36:32.856 "params": { 00:36:32.856 "name": "Nvme$subsystem", 00:36:32.856 "trtype": "$TEST_TRANSPORT", 00:36:32.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.856 "adrfam": "ipv4", 00:36:32.856 "trsvcid": "$NVMF_PORT", 00:36:32.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.856 "hdgst": ${hdgst:-false}, 00:36:32.856 "ddgst": ${ddgst:-false} 00:36:32.856 }, 00:36:32.856 "method": "bdev_nvme_attach_controller" 00:36:32.856 } 00:36:32.856 EOF 00:36:32.856 )") 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.856 20:56:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:32.856 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.856 { 00:36:32.856 "params": { 00:36:32.856 "name": "Nvme$subsystem", 00:36:32.856 "trtype": "$TEST_TRANSPORT", 00:36:32.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.856 "adrfam": "ipv4", 00:36:32.856 "trsvcid": "$NVMF_PORT", 00:36:32.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.857 "hdgst": ${hdgst:-false}, 00:36:32.857 "ddgst": ${ddgst:-false} 00:36:32.857 }, 00:36:32.857 "method": "bdev_nvme_attach_controller" 00:36:32.857 } 00:36:32.857 EOF 00:36:32.857 )") 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.857 20:56:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.857 { 00:36:32.857 "params": { 00:36:32.857 "name": "Nvme$subsystem", 00:36:32.857 "trtype": "$TEST_TRANSPORT", 00:36:32.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.857 "adrfam": "ipv4", 00:36:32.857 "trsvcid": "$NVMF_PORT", 00:36:32.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.857 "hdgst": ${hdgst:-false}, 00:36:32.857 "ddgst": ${ddgst:-false} 00:36:32.857 }, 00:36:32.857 "method": "bdev_nvme_attach_controller" 00:36:32.857 } 00:36:32.857 EOF 00:36:32.857 )") 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:32.857 "params": { 00:36:32.857 "name": "Nvme0", 00:36:32.857 "trtype": "tcp", 00:36:32.857 "traddr": "10.0.0.2", 00:36:32.857 "adrfam": "ipv4", 00:36:32.857 "trsvcid": "4420", 00:36:32.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:32.857 "hdgst": false, 00:36:32.857 "ddgst": false 00:36:32.857 }, 00:36:32.857 "method": "bdev_nvme_attach_controller" 00:36:32.857 },{ 00:36:32.857 "params": { 00:36:32.857 "name": "Nvme1", 00:36:32.857 "trtype": "tcp", 00:36:32.857 "traddr": "10.0.0.2", 00:36:32.857 "adrfam": "ipv4", 00:36:32.857 "trsvcid": "4420", 00:36:32.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:32.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:32.857 "hdgst": false, 00:36:32.857 "ddgst": false 00:36:32.857 }, 00:36:32.857 "method": "bdev_nvme_attach_controller" 00:36:32.857 },{ 00:36:32.857 "params": { 00:36:32.857 "name": "Nvme2", 00:36:32.857 "trtype": "tcp", 00:36:32.857 "traddr": "10.0.0.2", 00:36:32.857 "adrfam": "ipv4", 00:36:32.857 "trsvcid": "4420", 00:36:32.857 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:32.857 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:32.857 "hdgst": false, 00:36:32.857 "ddgst": false 00:36:32.857 }, 00:36:32.857 "method": "bdev_nvme_attach_controller" 00:36:32.857 }' 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.857 20:56:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:32.857 20:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.857 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:32.857 ... 00:36:32.857 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:32.857 ... 00:36:32.857 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:32.857 ... 
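[annotation] The trace above shows `nvmf/common.sh` building one heredoc JSON fragment per subsystem, joining them with `IFS=,` and piping the result through `jq` into fio's `--spdk_json_conf` file descriptor. A minimal sketch of that assembly in Python (this is an illustration, not the SPDK script itself; field values mirror those printed in the log):

```python
import json

def attach_entry(subsystem, traddr="10.0.0.2", trsvcid="4420",
                 hdgst=False, ddgst=False):
    """Build one config entry, mirroring the heredoc in nvmf/common.sh."""
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    }

# This run attaches three controllers, Nvme0..Nvme2, as seen in the
# printf '%s\n' output above.
config = [attach_entry(i) for i in range(3)]
print(json.dumps(config, indent=2))
```

The three resulting `bdev_nvme_attach_controller` entries correspond to the filename0/filename1/filename2 bdevs fio opens next.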
00:36:32.857 fio-3.35 00:36:32.857 Starting 24 threads 00:36:45.048 00:36:45.048 filename0: (groupid=0, jobs=1): err= 0: pid=642106: Thu Dec 5 20:56:36 2024 00:36:45.048 read: IOPS=665, BW=2661KiB/s (2724kB/s)(26.0MiB/10007msec) 00:36:45.048 slat (nsec): min=7986, max=87152, avg=41939.02, stdev=14704.48 00:36:45.048 clat (usec): min=13093, max=30225, avg=23715.45, stdev=1810.16 00:36:45.048 lat (usec): min=13124, max=30248, avg=23757.39, stdev=1810.40 00:36:45.048 clat percentiles (usec): 00:36:45.048 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.048 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:45.048 | 70.00th=[23462], 80.00th=[25297], 90.00th=[26870], 95.00th=[27657], 00:36:45.048 | 99.00th=[28181], 99.50th=[28181], 99.90th=[30016], 99.95th=[30278], 00:36:45.048 | 99.99th=[30278] 00:36:45.048 bw ( KiB/s): min= 2304, max= 2821, per=4.17%, avg=2654.58, stdev=159.03, samples=19 00:36:45.048 iops : min= 576, max= 705, avg=663.63, stdev=39.74, samples=19 00:36:45.048 lat (msec) : 20=0.48%, 50=99.52% 00:36:45.048 cpu : usr=97.99%, sys=1.08%, ctx=178, majf=0, minf=9 00:36:45.048 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.048 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.048 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.048 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.048 filename0: (groupid=0, jobs=1): err= 0: pid=642107: Thu Dec 5 20:56:36 2024 00:36:45.048 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10012msec) 00:36:45.048 slat (nsec): min=7019, max=83669, avg=28456.49, stdev=15620.73 00:36:45.048 clat (usec): min=11521, max=30303, avg=23800.09, stdev=1951.48 00:36:45.048 lat (usec): min=11565, max=30320, avg=23828.54, stdev=1951.26 00:36:45.048 clat percentiles (usec): 00:36:45.048 | 
1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22938], 00:36:45.048 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23200], 00:36:45.048 | 70.00th=[23725], 80.00th=[25297], 90.00th=[27132], 95.00th=[27657], 00:36:45.048 | 99.00th=[28181], 99.50th=[28443], 99.90th=[30278], 99.95th=[30278], 00:36:45.048 | 99.99th=[30278] 00:36:45.048 bw ( KiB/s): min= 2304, max= 2944, per=4.18%, avg=2662.65, stdev=169.44, samples=20 00:36:45.048 iops : min= 576, max= 736, avg=665.65, stdev=42.35, samples=20 00:36:45.048 lat (msec) : 20=0.96%, 50=99.04% 00:36:45.048 cpu : usr=98.36%, sys=1.02%, ctx=110, majf=0, minf=9 00:36:45.048 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:45.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.048 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.048 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.048 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.048 filename0: (groupid=0, jobs=1): err= 0: pid=642108: Thu Dec 5 20:56:36 2024 00:36:45.048 read: IOPS=664, BW=2659KiB/s (2723kB/s)(26.0MiB/10013msec) 00:36:45.048 slat (nsec): min=5658, max=75672, avg=18318.42, stdev=12771.21 00:36:45.048 clat (usec): min=17519, max=28884, avg=23889.59, stdev=1731.72 00:36:45.048 lat (usec): min=17575, max=28899, avg=23907.91, stdev=1731.72 00:36:45.048 clat percentiles (usec): 00:36:45.048 | 1.00th=[21627], 5.00th=[21890], 10.00th=[22676], 20.00th=[22938], 00:36:45.048 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23200], 00:36:45.048 | 70.00th=[23725], 80.00th=[25297], 90.00th=[27132], 95.00th=[27657], 00:36:45.048 | 99.00th=[28181], 99.50th=[28443], 99.90th=[28705], 99.95th=[28967], 00:36:45.048 | 99.99th=[28967] 00:36:45.048 bw ( KiB/s): min= 2432, max= 2944, per=4.17%, avg=2654.32, stdev=146.83, samples=19 00:36:45.048 iops : min= 608, max= 736, avg=663.58, stdev=36.71, samples=19 
00:36:45.048 lat (msec) : 20=0.24%, 50=99.76% 00:36:45.049 cpu : usr=98.51%, sys=0.94%, ctx=76, majf=0, minf=9 00:36:45.049 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:45.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.049 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.049 filename0: (groupid=0, jobs=1): err= 0: pid=642109: Thu Dec 5 20:56:36 2024 00:36:45.049 read: IOPS=664, BW=2659KiB/s (2723kB/s)(26.0MiB/10013msec) 00:36:45.049 slat (nsec): min=7283, max=94743, avg=43595.77, stdev=14238.68 00:36:45.049 clat (usec): min=12931, max=30304, avg=23704.71, stdev=1823.71 00:36:45.049 lat (usec): min=12943, max=30337, avg=23748.30, stdev=1823.84 00:36:45.049 clat percentiles (usec): 00:36:45.049 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.049 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:45.049 | 70.00th=[23462], 80.00th=[25297], 90.00th=[26870], 95.00th=[27657], 00:36:45.049 | 99.00th=[28181], 99.50th=[28181], 99.90th=[30016], 99.95th=[30278], 00:36:45.049 | 99.99th=[30278] 00:36:45.049 bw ( KiB/s): min= 2432, max= 2944, per=4.17%, avg=2656.25, stdev=149.07, samples=20 00:36:45.049 iops : min= 608, max= 736, avg=664.05, stdev=37.27, samples=20 00:36:45.049 lat (msec) : 20=0.48%, 50=99.52% 00:36:45.049 cpu : usr=98.02%, sys=1.26%, ctx=93, majf=0, minf=9 00:36:45.049 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:45.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.049 latency : target=0, window=0, percentile=100.00%, depth=16 
00:36:45.049 filename0: (groupid=0, jobs=1): err= 0: pid=642110: Thu Dec 5 20:56:36 2024 00:36:45.049 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10005msec) 00:36:45.049 slat (nsec): min=4547, max=95998, avg=36561.71, stdev=16545.67 00:36:45.049 clat (usec): min=4620, max=47544, avg=23734.13, stdev=2388.87 00:36:45.049 lat (usec): min=4668, max=47557, avg=23770.70, stdev=2389.43 00:36:45.049 clat percentiles (usec): 00:36:45.049 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.049 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:45.049 | 70.00th=[23462], 80.00th=[25297], 90.00th=[26870], 95.00th=[27395], 00:36:45.049 | 99.00th=[28181], 99.50th=[28705], 99.90th=[47449], 99.95th=[47449], 00:36:45.049 | 99.99th=[47449] 00:36:45.049 bw ( KiB/s): min= 2432, max= 2944, per=4.16%, avg=2647.84, stdev=147.97, samples=19 00:36:45.049 iops : min= 608, max= 736, avg=661.95, stdev=37.00, samples=19 00:36:45.049 lat (msec) : 10=0.48%, 20=0.24%, 50=99.28% 00:36:45.049 cpu : usr=98.95%, sys=0.67%, ctx=21, majf=0, minf=9 00:36:45.049 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:45.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.049 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.049 filename0: (groupid=0, jobs=1): err= 0: pid=642111: Thu Dec 5 20:56:36 2024 00:36:45.049 read: IOPS=663, BW=2656KiB/s (2719kB/s)(25.9MiB/10001msec) 00:36:45.049 slat (nsec): min=6258, max=92441, avg=33723.46, stdev=17181.63 00:36:45.049 clat (usec): min=10815, max=34363, avg=23823.41, stdev=1947.34 00:36:45.049 lat (usec): min=10823, max=34381, avg=23857.14, stdev=1949.15 00:36:45.049 clat percentiles (usec): 00:36:45.049 | 1.00th=[19530], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 
00:36:45.049 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:45.049 | 70.00th=[23462], 80.00th=[25560], 90.00th=[26870], 95.00th=[27657], 00:36:45.049 | 99.00th=[28705], 99.50th=[31851], 99.90th=[34341], 99.95th=[34341], 00:36:45.049 | 99.99th=[34341] 00:36:45.049 bw ( KiB/s): min= 2304, max= 2944, per=4.16%, avg=2647.84, stdev=168.88, samples=19 00:36:45.049 iops : min= 576, max= 736, avg=661.95, stdev=42.23, samples=19 00:36:45.049 lat (msec) : 20=1.14%, 50=98.86% 00:36:45.049 cpu : usr=98.64%, sys=0.83%, ctx=49, majf=0, minf=9 00:36:45.049 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:45.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.049 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.049 filename0: (groupid=0, jobs=1): err= 0: pid=642112: Thu Dec 5 20:56:36 2024 00:36:45.049 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10012msec) 00:36:45.049 slat (nsec): min=7280, max=98141, avg=34109.48, stdev=17792.46 00:36:45.049 clat (usec): min=11391, max=30245, avg=23755.51, stdev=1946.59 00:36:45.049 lat (usec): min=11408, max=30263, avg=23789.62, stdev=1946.19 00:36:45.049 clat percentiles (usec): 00:36:45.049 | 1.00th=[21103], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.049 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:45.049 | 70.00th=[23725], 80.00th=[25297], 90.00th=[27132], 95.00th=[27657], 00:36:45.049 | 99.00th=[28181], 99.50th=[28443], 99.90th=[30016], 99.95th=[30278], 00:36:45.049 | 99.99th=[30278] 00:36:45.049 bw ( KiB/s): min= 2304, max= 2944, per=4.18%, avg=2662.65, stdev=169.44, samples=20 00:36:45.049 iops : min= 576, max= 736, avg=665.65, stdev=42.35, samples=20 00:36:45.049 lat (msec) : 20=0.96%, 50=99.04% 00:36:45.049 cpu : 
usr=97.99%, sys=1.25%, ctx=165, majf=0, minf=9 00:36:45.049 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.049 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.049 filename0: (groupid=0, jobs=1): err= 0: pid=642113: Thu Dec 5 20:56:36 2024 00:36:45.049 read: IOPS=665, BW=2664KiB/s (2728kB/s)(26.0MiB/10004msec) 00:36:45.049 slat (nsec): min=5938, max=88184, avg=30588.20, stdev=14946.92 00:36:45.049 clat (usec): min=6166, max=47528, avg=23762.29, stdev=2452.21 00:36:45.049 lat (usec): min=6173, max=47542, avg=23792.88, stdev=2453.05 00:36:45.049 clat percentiles (usec): 00:36:45.049 | 1.00th=[18482], 5.00th=[21627], 10.00th=[22414], 20.00th=[22676], 00:36:45.049 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:45.049 | 70.00th=[23462], 80.00th=[25560], 90.00th=[26870], 95.00th=[27657], 00:36:45.049 | 99.00th=[28181], 99.50th=[28967], 99.90th=[47449], 99.95th=[47449], 00:36:45.049 | 99.99th=[47449] 00:36:45.049 bw ( KiB/s): min= 2416, max= 2944, per=4.16%, avg=2650.37, stdev=159.78, samples=19 00:36:45.049 iops : min= 604, max= 736, avg=662.58, stdev=39.95, samples=19 00:36:45.049 lat (msec) : 10=0.24%, 20=1.14%, 50=98.62% 00:36:45.049 cpu : usr=98.26%, sys=1.15%, ctx=137, majf=0, minf=9 00:36:45.049 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:45.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.049 issued rwts: total=6662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.049 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.049 filename1: (groupid=0, jobs=1): err= 0: pid=642114: 
Thu Dec 5 20:56:36 2024 00:36:45.049 read: IOPS=665, BW=2660KiB/s (2724kB/s)(26.0MiB/10008msec) 00:36:45.049 slat (usec): min=4, max=100, avg=42.53, stdev=17.71 00:36:45.049 clat (usec): min=13588, max=30291, avg=23710.48, stdev=1778.23 00:36:45.049 lat (usec): min=13619, max=30312, avg=23753.01, stdev=1780.35 00:36:45.049 clat percentiles (usec): 00:36:45.049 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.049 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:45.049 | 70.00th=[23462], 80.00th=[25297], 90.00th=[26870], 95.00th=[27395], 00:36:45.049 | 99.00th=[28181], 99.50th=[28181], 99.90th=[30016], 99.95th=[30278], 00:36:45.049 | 99.99th=[30278] 00:36:45.049 bw ( KiB/s): min= 2304, max= 2816, per=4.17%, avg=2654.32, stdev=158.74, samples=19 00:36:45.049 iops : min= 576, max= 704, avg=663.58, stdev=39.69, samples=19 00:36:45.049 lat (msec) : 20=0.48%, 50=99.52% 00:36:45.049 cpu : usr=98.74%, sys=0.87%, ctx=15, majf=0, minf=9 00:36:45.049 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:45.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.050 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.050 filename1: (groupid=0, jobs=1): err= 0: pid=642115: Thu Dec 5 20:56:36 2024 00:36:45.050 read: IOPS=663, BW=2656KiB/s (2719kB/s)(25.9MiB/10001msec) 00:36:45.050 slat (nsec): min=4794, max=95941, avg=36809.18, stdev=15924.01 00:36:45.050 clat (usec): min=8650, max=51162, avg=23766.81, stdev=2248.12 00:36:45.050 lat (usec): min=8693, max=51176, avg=23803.62, stdev=2248.29 00:36:45.050 clat percentiles (usec): 00:36:45.050 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.050 | 30.00th=[22938], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:45.050 
| 70.00th=[23462], 80.00th=[25297], 90.00th=[26870], 95.00th=[27395], 00:36:45.050 | 99.00th=[28181], 99.50th=[28705], 99.90th=[48497], 99.95th=[48497], 00:36:45.050 | 99.99th=[51119] 00:36:45.050 bw ( KiB/s): min= 2432, max= 2944, per=4.16%, avg=2647.58, stdev=135.28, samples=19 00:36:45.050 iops : min= 608, max= 736, avg=661.89, stdev=33.82, samples=19 00:36:45.050 lat (msec) : 10=0.24%, 20=0.24%, 50=99.49%, 100=0.03% 00:36:45.050 cpu : usr=98.85%, sys=0.76%, ctx=18, majf=0, minf=9 00:36:45.050 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.050 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.050 filename1: (groupid=0, jobs=1): err= 0: pid=642116: Thu Dec 5 20:56:36 2024 00:36:45.050 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10005msec) 00:36:45.050 slat (nsec): min=7075, max=88947, avg=32335.15, stdev=15144.07 00:36:45.050 clat (usec): min=4642, max=47528, avg=23790.76, stdev=2419.72 00:36:45.050 lat (usec): min=4683, max=47542, avg=23823.10, stdev=2418.45 00:36:45.050 clat percentiles (usec): 00:36:45.050 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.050 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:45.050 | 70.00th=[23725], 80.00th=[25560], 90.00th=[27132], 95.00th=[27657], 00:36:45.050 | 99.00th=[28443], 99.50th=[28705], 99.90th=[47449], 99.95th=[47449], 00:36:45.050 | 99.99th=[47449] 00:36:45.050 bw ( KiB/s): min= 2432, max= 2944, per=4.16%, avg=2647.84, stdev=147.97, samples=19 00:36:45.050 iops : min= 608, max= 736, avg=661.95, stdev=37.00, samples=19 00:36:45.050 lat (msec) : 10=0.48%, 20=0.33%, 50=99.19% 00:36:45.050 cpu : usr=97.76%, sys=1.50%, ctx=134, majf=0, minf=9 00:36:45.050 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.050 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.050 filename1: (groupid=0, jobs=1): err= 0: pid=642117: Thu Dec 5 20:56:36 2024 00:36:45.050 read: IOPS=671, BW=2685KiB/s (2749kB/s)(26.2MiB/10010msec) 00:36:45.050 slat (nsec): min=3874, max=76195, avg=15215.43, stdev=9384.29 00:36:45.050 clat (usec): min=8208, max=41891, avg=23758.72, stdev=2458.78 00:36:45.050 lat (usec): min=8217, max=41899, avg=23773.93, stdev=2458.17 00:36:45.050 clat percentiles (usec): 00:36:45.050 | 1.00th=[15008], 5.00th=[21365], 10.00th=[22152], 20.00th=[22938], 00:36:45.050 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23200], 60.00th=[23200], 00:36:45.050 | 70.00th=[23725], 80.00th=[25560], 90.00th=[27395], 95.00th=[27919], 00:36:45.050 | 99.00th=[28443], 99.50th=[29492], 99.90th=[36439], 99.95th=[41681], 00:36:45.050 | 99.99th=[41681] 00:36:45.050 bw ( KiB/s): min= 2384, max= 2944, per=4.20%, avg=2671.16, stdev=156.63, samples=19 00:36:45.050 iops : min= 596, max= 736, avg=667.79, stdev=39.16, samples=19 00:36:45.050 lat (msec) : 10=0.18%, 20=3.48%, 50=96.34% 00:36:45.050 cpu : usr=97.98%, sys=1.32%, ctx=65, majf=0, minf=9 00:36:45.050 IO depths : 1=0.6%, 2=1.4%, 4=3.4%, 8=77.5%, 16=17.0%, 32=0.0%, >=64=0.0% 00:36:45.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 complete : 0=0.0%, 4=90.0%, 8=9.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.050 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.050 filename1: (groupid=0, jobs=1): err= 0: pid=642118: Thu Dec 5 20:56:36 2024 00:36:45.050 read: IOPS=669, BW=2679KiB/s 
(2743kB/s)(26.2MiB/10009msec) 00:36:45.050 slat (nsec): min=6101, max=77809, avg=18961.03, stdev=10342.54 00:36:45.050 clat (usec): min=7650, max=40868, avg=23747.07, stdev=3762.62 00:36:45.050 lat (usec): min=7658, max=40877, avg=23766.03, stdev=3762.75 00:36:45.050 clat percentiles (usec): 00:36:45.050 | 1.00th=[12649], 5.00th=[17957], 10.00th=[21627], 20.00th=[22676], 00:36:45.050 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23200], 00:36:45.050 | 70.00th=[23725], 80.00th=[25822], 90.00th=[27657], 95.00th=[28705], 00:36:45.050 | 99.00th=[36439], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:36:45.050 | 99.99th=[40633] 00:36:45.050 bw ( KiB/s): min= 2304, max= 2848, per=4.18%, avg=2662.74, stdev=162.85, samples=19 00:36:45.050 iops : min= 576, max= 712, avg=665.68, stdev=40.71, samples=19 00:36:45.050 lat (msec) : 10=0.06%, 20=7.19%, 50=92.75% 00:36:45.050 cpu : usr=98.82%, sys=0.79%, ctx=18, majf=0, minf=9 00:36:45.050 IO depths : 1=3.6%, 2=8.3%, 4=20.6%, 8=58.5%, 16=9.0%, 32=0.0%, >=64=0.0% 00:36:45.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.050 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.050 filename1: (groupid=0, jobs=1): err= 0: pid=642119: Thu Dec 5 20:56:36 2024 00:36:45.050 read: IOPS=665, BW=2660KiB/s (2724kB/s)(26.0MiB/10008msec) 00:36:45.050 slat (usec): min=7, max=102, avg=45.42, stdev=17.74 00:36:45.050 clat (usec): min=13068, max=30259, avg=23636.38, stdev=1779.67 00:36:45.050 lat (usec): min=13092, max=30314, avg=23681.80, stdev=1783.08 00:36:45.050 clat percentiles (usec): 00:36:45.050 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.050 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:45.050 | 70.00th=[23462], 80.00th=[25035], 90.00th=[26870], 
95.00th=[27395], 00:36:45.050 | 99.00th=[27919], 99.50th=[28181], 99.90th=[30016], 99.95th=[30016], 00:36:45.050 | 99.99th=[30278] 00:36:45.050 bw ( KiB/s): min= 2304, max= 2821, per=4.18%, avg=2662.65, stdev=158.94, samples=20 00:36:45.050 iops : min= 576, max= 705, avg=665.65, stdev=39.72, samples=20 00:36:45.050 lat (msec) : 20=0.48%, 50=99.52% 00:36:45.050 cpu : usr=98.99%, sys=0.64%, ctx=18, majf=0, minf=9 00:36:45.050 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.050 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.050 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.050 filename1: (groupid=0, jobs=1): err= 0: pid=642120: Thu Dec 5 20:56:36 2024 00:36:45.050 read: IOPS=663, BW=2655KiB/s (2718kB/s)(25.9MiB/10005msec) 00:36:45.050 slat (nsec): min=5659, max=75477, avg=18188.99, stdev=12962.49 00:36:45.050 clat (usec): min=12058, max=33998, avg=23920.21, stdev=1891.70 00:36:45.050 lat (usec): min=12068, max=34006, avg=23938.40, stdev=1892.72 00:36:45.050 clat percentiles (usec): 00:36:45.050 | 1.00th=[21627], 5.00th=[22152], 10.00th=[22676], 20.00th=[22938], 00:36:45.050 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:45.050 | 70.00th=[23725], 80.00th=[25297], 90.00th=[27395], 95.00th=[27657], 00:36:45.050 | 99.00th=[28181], 99.50th=[31065], 99.90th=[32375], 99.95th=[33162], 00:36:45.050 | 99.99th=[33817] 00:36:45.050 bw ( KiB/s): min= 2304, max= 2816, per=4.16%, avg=2647.58, stdev=154.15, samples=19 00:36:45.050 iops : min= 576, max= 704, avg=661.89, stdev=38.54, samples=19 00:36:45.050 lat (msec) : 20=0.30%, 50=99.70% 00:36:45.050 cpu : usr=98.87%, sys=0.74%, ctx=15, majf=0, minf=9 00:36:45.050 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:45.050 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.051 filename1: (groupid=0, jobs=1): err= 0: pid=642121: Thu Dec 5 20:56:36 2024 00:36:45.051 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10012msec) 00:36:45.051 slat (nsec): min=6853, max=79976, avg=17868.94, stdev=11790.96 00:36:45.051 clat (usec): min=11501, max=32590, avg=23868.72, stdev=2058.96 00:36:45.051 lat (usec): min=11538, max=32629, avg=23886.59, stdev=2057.69 00:36:45.051 clat percentiles (usec): 00:36:45.051 | 1.00th=[18220], 5.00th=[21890], 10.00th=[22676], 20.00th=[22938], 00:36:45.051 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:36:45.051 | 70.00th=[23725], 80.00th=[25560], 90.00th=[27132], 95.00th=[27657], 00:36:45.051 | 99.00th=[28181], 99.50th=[28705], 99.90th=[31327], 99.95th=[32113], 00:36:45.051 | 99.99th=[32637] 00:36:45.051 bw ( KiB/s): min= 2304, max= 2944, per=4.18%, avg=2662.65, stdev=169.44, samples=20 00:36:45.051 iops : min= 576, max= 736, avg=665.65, stdev=42.35, samples=20 00:36:45.051 lat (msec) : 20=1.20%, 50=98.80% 00:36:45.051 cpu : usr=98.67%, sys=0.80%, ctx=62, majf=0, minf=9 00:36:45.051 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:45.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.051 filename2: (groupid=0, jobs=1): err= 0: pid=642122: Thu Dec 5 20:56:36 2024 00:36:45.051 read: IOPS=665, BW=2661KiB/s (2724kB/s)(26.0MiB/10007msec) 00:36:45.051 slat (nsec): min=6194, max=87009, avg=43109.70, 
stdev=14585.07 00:36:45.051 clat (usec): min=7508, max=33204, avg=23673.70, stdev=2007.12 00:36:45.051 lat (usec): min=7515, max=33222, avg=23716.81, stdev=2008.69 00:36:45.051 clat percentiles (usec): 00:36:45.051 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.051 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:45.051 | 70.00th=[23462], 80.00th=[25297], 90.00th=[26870], 95.00th=[27395], 00:36:45.051 | 99.00th=[28181], 99.50th=[28705], 99.90th=[33162], 99.95th=[33162], 00:36:45.051 | 99.99th=[33162] 00:36:45.051 bw ( KiB/s): min= 2432, max= 2816, per=4.16%, avg=2647.58, stdev=135.28, samples=19 00:36:45.051 iops : min= 608, max= 704, avg=661.89, stdev=33.82, samples=19 00:36:45.051 lat (msec) : 10=0.24%, 20=0.24%, 50=99.52% 00:36:45.051 cpu : usr=97.69%, sys=1.45%, ctx=193, majf=0, minf=9 00:36:45.051 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:45.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.051 filename2: (groupid=0, jobs=1): err= 0: pid=642123: Thu Dec 5 20:56:36 2024 00:36:45.051 read: IOPS=663, BW=2655KiB/s (2719kB/s)(25.9MiB/10004msec) 00:36:45.051 slat (nsec): min=3503, max=88214, avg=35781.15, stdev=14827.21 00:36:45.051 clat (usec): min=14919, max=37591, avg=23788.16, stdev=1869.24 00:36:45.051 lat (usec): min=14961, max=37602, avg=23823.94, stdev=1869.48 00:36:45.051 clat percentiles (usec): 00:36:45.051 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.051 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:45.051 | 70.00th=[23462], 80.00th=[25297], 90.00th=[26870], 95.00th=[27395], 00:36:45.051 | 99.00th=[28181], 99.50th=[28705], 
99.90th=[37487], 99.95th=[37487], 00:36:45.051 | 99.99th=[37487] 00:36:45.051 bw ( KiB/s): min= 2427, max= 2944, per=4.16%, avg=2647.58, stdev=148.37, samples=19 00:36:45.051 iops : min= 606, max= 736, avg=661.84, stdev=37.16, samples=19 00:36:45.051 lat (msec) : 20=0.24%, 50=99.76% 00:36:45.051 cpu : usr=97.89%, sys=1.28%, ctx=110, majf=0, minf=9 00:36:45.051 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:45.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.051 filename2: (groupid=0, jobs=1): err= 0: pid=642124: Thu Dec 5 20:56:36 2024 00:36:45.051 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10006msec) 00:36:45.051 slat (nsec): min=4640, max=87898, avg=36077.81, stdev=14867.72 00:36:45.051 clat (usec): min=4708, max=48051, avg=23737.23, stdev=2418.27 00:36:45.051 lat (usec): min=4741, max=48066, avg=23773.30, stdev=2417.84 00:36:45.051 clat percentiles (usec): 00:36:45.051 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.051 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:45.051 | 70.00th=[23462], 80.00th=[25297], 90.00th=[26870], 95.00th=[27657], 00:36:45.051 | 99.00th=[28181], 99.50th=[28705], 99.90th=[47973], 99.95th=[47973], 00:36:45.051 | 99.99th=[47973] 00:36:45.051 bw ( KiB/s): min= 2432, max= 2944, per=4.16%, avg=2647.58, stdev=135.28, samples=19 00:36:45.051 iops : min= 608, max= 736, avg=661.89, stdev=33.82, samples=19 00:36:45.051 lat (msec) : 10=0.48%, 20=0.27%, 50=99.25% 00:36:45.051 cpu : usr=98.84%, sys=0.77%, ctx=18, majf=0, minf=9 00:36:45.051 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:36:45.051 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.051 filename2: (groupid=0, jobs=1): err= 0: pid=642125: Thu Dec 5 20:56:36 2024 00:36:45.051 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10012msec) 00:36:45.051 slat (usec): min=7, max=100, avg=45.02, stdev=17.44 00:36:45.051 clat (usec): min=11724, max=30302, avg=23625.59, stdev=1912.93 00:36:45.051 lat (usec): min=11768, max=30332, avg=23670.61, stdev=1916.71 00:36:45.051 clat percentiles (usec): 00:36:45.051 | 1.00th=[21103], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.051 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:45.051 | 70.00th=[23462], 80.00th=[25035], 90.00th=[26870], 95.00th=[27395], 00:36:45.051 | 99.00th=[27919], 99.50th=[28181], 99.90th=[30016], 99.95th=[30016], 00:36:45.051 | 99.99th=[30278] 00:36:45.051 bw ( KiB/s): min= 2304, max= 2944, per=4.18%, avg=2662.65, stdev=169.44, samples=20 00:36:45.051 iops : min= 576, max= 736, avg=665.65, stdev=42.35, samples=20 00:36:45.051 lat (msec) : 20=0.96%, 50=99.04% 00:36:45.051 cpu : usr=98.91%, sys=0.65%, ctx=41, majf=0, minf=9 00:36:45.051 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.051 filename2: (groupid=0, jobs=1): err= 0: pid=642126: Thu Dec 5 20:56:36 2024 00:36:45.051 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10011msec) 00:36:45.051 slat (usec): min=4, max=102, avg=45.39, stdev=17.78 00:36:45.051 clat (usec): min=11782, max=30250, avg=23594.35, 
stdev=1934.30 00:36:45.051 lat (usec): min=11823, max=30298, avg=23639.74, stdev=1938.00 00:36:45.051 clat percentiles (usec): 00:36:45.051 | 1.00th=[21103], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.051 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:45.051 | 70.00th=[23462], 80.00th=[25035], 90.00th=[26870], 95.00th=[27395], 00:36:45.051 | 99.00th=[27919], 99.50th=[28181], 99.90th=[30016], 99.95th=[30016], 00:36:45.051 | 99.99th=[30278] 00:36:45.051 bw ( KiB/s): min= 2304, max= 2944, per=4.18%, avg=2662.40, stdev=169.20, samples=20 00:36:45.051 iops : min= 576, max= 736, avg=665.60, stdev=42.30, samples=20 00:36:45.051 lat (msec) : 20=0.96%, 50=99.04% 00:36:45.051 cpu : usr=98.97%, sys=0.65%, ctx=14, majf=0, minf=9 00:36:45.051 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:45.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.051 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.051 filename2: (groupid=0, jobs=1): err= 0: pid=642127: Thu Dec 5 20:56:36 2024 00:36:45.051 read: IOPS=669, BW=2676KiB/s (2740kB/s)(26.2MiB/10020msec) 00:36:45.051 slat (nsec): min=6566, max=61861, avg=12509.24, stdev=6515.51 00:36:45.051 clat (usec): min=3139, max=37567, avg=23822.90, stdev=2376.68 00:36:45.051 lat (usec): min=3146, max=37577, avg=23835.41, stdev=2376.71 00:36:45.051 clat percentiles (usec): 00:36:45.052 | 1.00th=[13042], 5.00th=[21890], 10.00th=[22676], 20.00th=[22938], 00:36:45.052 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23200], 60.00th=[23200], 00:36:45.052 | 70.00th=[23725], 80.00th=[25297], 90.00th=[27132], 95.00th=[27657], 00:36:45.052 | 99.00th=[28181], 99.50th=[28443], 99.90th=[33817], 99.95th=[35390], 00:36:45.052 | 99.99th=[37487] 00:36:45.052 bw ( KiB/s): min= 2416, 
max= 3072, per=4.20%, avg=2675.20, stdev=165.92, samples=20 00:36:45.052 iops : min= 604, max= 768, avg=668.80, stdev=41.48, samples=20 00:36:45.052 lat (msec) : 4=0.24%, 20=1.43%, 50=98.33% 00:36:45.052 cpu : usr=98.70%, sys=0.92%, ctx=15, majf=0, minf=9 00:36:45.052 IO depths : 1=1.2%, 2=7.5%, 4=24.9%, 8=55.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:36:45.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.052 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.052 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.052 filename2: (groupid=0, jobs=1): err= 0: pid=642128: Thu Dec 5 20:56:36 2024 00:36:45.052 read: IOPS=663, BW=2654KiB/s (2718kB/s)(25.9MiB/10006msec) 00:36:45.052 slat (nsec): min=5253, max=89153, avg=34753.92, stdev=14768.79 00:36:45.052 clat (usec): min=14981, max=41667, avg=23821.91, stdev=1916.30 00:36:45.052 lat (usec): min=15028, max=41682, avg=23856.67, stdev=1915.77 00:36:45.052 clat percentiles (usec): 00:36:45.052 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.052 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:45.052 | 70.00th=[23462], 80.00th=[25297], 90.00th=[26870], 95.00th=[27657], 00:36:45.052 | 99.00th=[28181], 99.50th=[28705], 99.90th=[39060], 99.95th=[39060], 00:36:45.052 | 99.99th=[41681] 00:36:45.052 bw ( KiB/s): min= 2427, max= 2944, per=4.16%, avg=2647.32, stdev=148.53, samples=19 00:36:45.052 iops : min= 606, max= 736, avg=661.79, stdev=37.20, samples=19 00:36:45.052 lat (msec) : 20=0.27%, 50=99.73% 00:36:45.052 cpu : usr=98.12%, sys=1.23%, ctx=71, majf=0, minf=9 00:36:45.052 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.052 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:45.052 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.052 filename2: (groupid=0, jobs=1): err= 0: pid=642129: Thu Dec 5 20:56:36 2024 00:36:45.052 read: IOPS=662, BW=2652KiB/s (2715kB/s)(26.0MiB/10051msec) 00:36:45.052 slat (usec): min=5, max=100, avg=44.52, stdev=18.79 00:36:45.052 clat (usec): min=13572, max=50858, avg=23638.33, stdev=2046.70 00:36:45.052 lat (usec): min=13580, max=50895, avg=23682.85, stdev=2050.30 00:36:45.052 clat percentiles (usec): 00:36:45.052 | 1.00th=[21365], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:45.052 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:45.052 | 70.00th=[23462], 80.00th=[25035], 90.00th=[26870], 95.00th=[27395], 00:36:45.052 | 99.00th=[27919], 99.50th=[28181], 99.90th=[50594], 99.95th=[50594], 00:36:45.052 | 99.99th=[51119] 00:36:45.052 bw ( KiB/s): min= 2432, max= 2944, per=4.18%, avg=2662.40, stdev=153.15, samples=20 00:36:45.052 iops : min= 608, max= 736, avg=665.60, stdev=38.29, samples=20 00:36:45.052 lat (msec) : 20=0.75%, 50=99.14%, 100=0.11% 00:36:45.052 cpu : usr=99.15%, sys=0.47%, ctx=33, majf=0, minf=9 00:36:45.052 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.052 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.052 issued rwts: total=6663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.052 00:36:45.052 Run status group 0 (all jobs): 00:36:45.052 READ: bw=62.1MiB/s (65.2MB/s), 2652KiB/s-2685KiB/s (2715kB/s-2749kB/s), io=625MiB (655MB), run=10001-10051msec 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 
00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:45.052 20:56:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:45.052 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@117 -- # create_subsystems 0 1 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.053 bdev_null0 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:45.053 20:56:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.053 [2024-12-05 20:56:37.232679] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.053 bdev_null1 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.053 
20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 1 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:45.053 { 00:36:45.053 "params": { 00:36:45.053 "name": "Nvme$subsystem", 00:36:45.053 "trtype": "$TEST_TRANSPORT", 00:36:45.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:45.053 "adrfam": "ipv4", 00:36:45.053 "trsvcid": "$NVMF_PORT", 00:36:45.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:45.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:45.053 "hdgst": ${hdgst:-false}, 00:36:45.053 "ddgst": ${ddgst:-false} 00:36:45.053 }, 00:36:45.053 "method": "bdev_nvme_attach_controller" 00:36:45.053 } 00:36:45.053 EOF 00:36:45.053 )") 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@73 -- # cat 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:45.053 { 00:36:45.053 "params": { 00:36:45.053 "name": "Nvme$subsystem", 00:36:45.053 "trtype": "$TEST_TRANSPORT", 00:36:45.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:45.053 "adrfam": "ipv4", 00:36:45.053 "trsvcid": "$NVMF_PORT", 00:36:45.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:45.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:45.053 "hdgst": ${hdgst:-false}, 00:36:45.053 "ddgst": ${ddgst:-false} 00:36:45.053 }, 00:36:45.053 "method": "bdev_nvme_attach_controller" 00:36:45.053 } 00:36:45.053 EOF 00:36:45.053 )") 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:45.053 20:56:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:45.053 "params": { 00:36:45.053 "name": "Nvme0", 00:36:45.053 "trtype": "tcp", 00:36:45.053 "traddr": "10.0.0.2", 00:36:45.053 "adrfam": "ipv4", 00:36:45.053 "trsvcid": "4420", 00:36:45.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:45.053 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:45.053 "hdgst": false, 00:36:45.053 "ddgst": false 00:36:45.054 }, 00:36:45.054 "method": "bdev_nvme_attach_controller" 00:36:45.054 },{ 00:36:45.054 "params": { 00:36:45.054 "name": "Nvme1", 00:36:45.054 "trtype": "tcp", 00:36:45.054 "traddr": "10.0.0.2", 00:36:45.054 "adrfam": "ipv4", 00:36:45.054 "trsvcid": "4420", 00:36:45.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:45.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:45.054 "hdgst": false, 00:36:45.054 "ddgst": false 00:36:45.054 }, 00:36:45.054 "method": "bdev_nvme_attach_controller" 00:36:45.054 }' 00:36:45.054 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:45.054 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:45.054 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:45.054 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:45.054 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:45.054 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:45.054 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:45.054 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:45.054 20:56:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:45.054 20:56:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:45.054 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:45.054 ... 00:36:45.054 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:45.054 ... 00:36:45.054 fio-3.35 00:36:45.054 Starting 4 threads 00:36:50.316 00:36:50.316 filename0: (groupid=0, jobs=1): err= 0: pid=644330: Thu Dec 5 20:56:43 2024 00:36:50.316 read: IOPS=3050, BW=23.8MiB/s (25.0MB/s)(119MiB/5003msec) 00:36:50.316 slat (nsec): min=5565, max=32271, avg=7956.36, stdev=2773.10 00:36:50.316 clat (usec): min=860, max=5214, avg=2599.83, stdev=367.02 00:36:50.316 lat (usec): min=873, max=5226, avg=2607.78, stdev=366.76 00:36:50.316 clat percentiles (usec): 00:36:50.316 | 1.00th=[ 1729], 5.00th=[ 2040], 10.00th=[ 2147], 20.00th=[ 2311], 00:36:50.316 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2704], 60.00th=[ 2737], 00:36:50.316 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2966], 95.00th=[ 3163], 00:36:50.316 | 99.00th=[ 3654], 99.50th=[ 3818], 99.90th=[ 4359], 99.95th=[ 4555], 00:36:50.316 | 99.99th=[ 5211] 00:36:50.316 bw ( KiB/s): min=23184, max=25616, per=26.44%, avg=24412.80, stdev=814.23, samples=10 00:36:50.316 iops : min= 2898, max= 3202, avg=3051.60, stdev=101.78, samples=10 00:36:50.316 lat (usec) : 1000=0.20% 00:36:50.316 lat (msec) : 2=4.00%, 4=95.52%, 10=0.29% 00:36:50.316 cpu : usr=95.92%, sys=3.76%, ctx=7, majf=0, minf=9 00:36:50.316 IO depths : 1=0.1%, 2=2.8%, 4=66.9%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:50.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.316 complete : 0=0.0%, 
4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.316 issued rwts: total=15264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:50.316 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:50.316 filename0: (groupid=0, jobs=1): err= 0: pid=644331: Thu Dec 5 20:56:43 2024 00:36:50.316 read: IOPS=2818, BW=22.0MiB/s (23.1MB/s)(110MiB/5001msec) 00:36:50.317 slat (nsec): min=5572, max=36172, avg=7801.91, stdev=2711.56 00:36:50.317 clat (usec): min=843, max=6327, avg=2815.66, stdev=358.48 00:36:50.317 lat (usec): min=854, max=6333, avg=2823.46, stdev=358.39 00:36:50.317 clat percentiles (usec): 00:36:50.317 | 1.00th=[ 1909], 5.00th=[ 2245], 10.00th=[ 2474], 20.00th=[ 2671], 00:36:50.317 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769], 00:36:50.317 | 70.00th=[ 2868], 80.00th=[ 2999], 90.00th=[ 3261], 95.00th=[ 3425], 00:36:50.317 | 99.00th=[ 3949], 99.50th=[ 4228], 99.90th=[ 4686], 99.95th=[ 4752], 00:36:50.317 | 99.99th=[ 6325] 00:36:50.317 bw ( KiB/s): min=21600, max=23056, per=24.44%, avg=22573.89, stdev=454.05, samples=9 00:36:50.317 iops : min= 2700, max= 2882, avg=2821.67, stdev=56.71, samples=9 00:36:50.317 lat (usec) : 1000=0.01% 00:36:50.317 lat (msec) : 2=1.64%, 4=97.52%, 10=0.84% 00:36:50.317 cpu : usr=96.20%, sys=3.48%, ctx=5, majf=0, minf=9 00:36:50.317 IO depths : 1=0.1%, 2=1.1%, 4=71.9%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:50.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.317 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.317 issued rwts: total=14094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:50.317 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:50.317 filename1: (groupid=0, jobs=1): err= 0: pid=644332: Thu Dec 5 20:56:43 2024 00:36:50.317 read: IOPS=2788, BW=21.8MiB/s (22.8MB/s)(109MiB/5002msec) 00:36:50.317 slat (nsec): min=5618, max=35236, avg=8086.43, stdev=2944.31 00:36:50.317 clat (usec): min=797, max=5799, avg=2845.80, 
stdev=360.23 00:36:50.317 lat (usec): min=803, max=5806, avg=2853.89, stdev=360.07 00:36:50.317 clat percentiles (usec): 00:36:50.317 | 1.00th=[ 1991], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2671], 00:36:50.317 | 30.00th=[ 2737], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2769], 00:36:50.317 | 70.00th=[ 2900], 80.00th=[ 3032], 90.00th=[ 3294], 95.00th=[ 3458], 00:36:50.317 | 99.00th=[ 4080], 99.50th=[ 4490], 99.90th=[ 4817], 99.95th=[ 5080], 00:36:50.317 | 99.99th=[ 5800] 00:36:50.317 bw ( KiB/s): min=21616, max=22960, per=24.23%, avg=22378.67, stdev=492.57, samples=9 00:36:50.317 iops : min= 2702, max= 2870, avg=2797.33, stdev=61.57, samples=9 00:36:50.317 lat (usec) : 1000=0.03% 00:36:50.317 lat (msec) : 2=1.04%, 4=97.70%, 10=1.23% 00:36:50.317 cpu : usr=95.44%, sys=4.26%, ctx=7, majf=0, minf=9 00:36:50.317 IO depths : 1=0.1%, 2=1.1%, 4=71.6%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:50.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.317 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.317 issued rwts: total=13948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:50.317 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:50.317 filename1: (groupid=0, jobs=1): err= 0: pid=644333: Thu Dec 5 20:56:43 2024 00:36:50.317 read: IOPS=2887, BW=22.6MiB/s (23.7MB/s)(113MiB/5002msec) 00:36:50.317 slat (usec): min=5, max=165, avg= 7.98, stdev= 3.15 00:36:50.317 clat (usec): min=1125, max=4938, avg=2747.32, stdev=349.86 00:36:50.317 lat (usec): min=1131, max=4950, avg=2755.30, stdev=349.70 00:36:50.317 clat percentiles (usec): 00:36:50.317 | 1.00th=[ 1909], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2507], 00:36:50.317 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769], 00:36:50.317 | 70.00th=[ 2769], 80.00th=[ 2966], 90.00th=[ 3163], 95.00th=[ 3359], 00:36:50.317 | 99.00th=[ 3851], 99.50th=[ 4178], 99.90th=[ 4621], 99.95th=[ 4817], 00:36:50.317 | 99.99th=[ 4948] 00:36:50.317 bw 
( KiB/s): min=22432, max=23776, per=25.01%, avg=23096.89, stdev=357.87, samples=9 00:36:50.317 iops : min= 2804, max= 2972, avg=2887.11, stdev=44.73, samples=9 00:36:50.317 lat (msec) : 2=1.66%, 4=97.62%, 10=0.72% 00:36:50.317 cpu : usr=96.24%, sys=3.44%, ctx=8, majf=0, minf=9 00:36:50.317 IO depths : 1=0.1%, 2=1.7%, 4=69.9%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:50.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.317 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.317 issued rwts: total=14445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:50.317 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:50.317 00:36:50.317 Run status group 0 (all jobs): 00:36:50.317 READ: bw=90.2MiB/s (94.6MB/s), 21.8MiB/s-23.8MiB/s (22.8MB/s-25.0MB/s), io=451MiB (473MB), run=5001-5003msec 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.317 20:56:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.317 00:36:50.317 real 0m24.478s 00:36:50.317 user 4m59.452s 00:36:50.317 sys 0m4.993s 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:50.317 20:56:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:50.317 ************************************ 00:36:50.317 END TEST fio_dif_rand_params 00:36:50.317 ************************************ 00:36:50.575 20:56:43 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:50.575 20:56:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:50.575 20:56:43 nvmf_dif -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:50.575 20:56:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:50.576 ************************************ 00:36:50.576 START TEST fio_dif_digest 00:36:50.576 ************************************ 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:50.576 bdev_null0 
00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:50.576 [2024-12-05 20:56:43.862304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:50.576 { 00:36:50.576 "params": { 00:36:50.576 "name": "Nvme$subsystem", 00:36:50.576 "trtype": "$TEST_TRANSPORT", 00:36:50.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:50.576 "adrfam": "ipv4", 00:36:50.576 "trsvcid": "$NVMF_PORT", 00:36:50.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:50.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:50.576 "hdgst": ${hdgst:-false}, 00:36:50.576 "ddgst": ${ddgst:-false} 00:36:50.576 }, 00:36:50.576 "method": "bdev_nvme_attach_controller" 00:36:50.576 } 00:36:50.576 EOF 00:36:50.576 )") 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:50.576 
20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:50.576 "params": { 00:36:50.576 "name": "Nvme0", 00:36:50.576 "trtype": "tcp", 00:36:50.576 "traddr": "10.0.0.2", 00:36:50.576 "adrfam": "ipv4", 00:36:50.576 "trsvcid": "4420", 00:36:50.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:50.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:50.576 "hdgst": true, 00:36:50.576 "ddgst": true 00:36:50.576 }, 00:36:50.576 "method": "bdev_nvme_attach_controller" 00:36:50.576 }' 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:50.576 20:56:43 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:50.576 20:56:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:50.833 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:50.833 ... 00:36:50.833 fio-3.35 00:36:50.833 Starting 3 threads 00:37:03.030 00:37:03.030 filename0: (groupid=0, jobs=1): err= 0: pid=645536: Thu Dec 5 20:56:54 2024 00:37:03.030 read: IOPS=316, BW=39.6MiB/s (41.5MB/s)(398MiB/10047msec) 00:37:03.030 slat (nsec): min=5782, max=42001, avg=16759.52, stdev=7216.79 00:37:03.030 clat (usec): min=7302, max=50880, avg=9434.37, stdev=1192.17 00:37:03.030 lat (usec): min=7314, max=50892, avg=9451.13, stdev=1192.01 00:37:03.030 clat percentiles (usec): 00:37:03.030 | 1.00th=[ 7898], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8848], 00:37:03.030 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:37:03.030 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:37:03.030 | 99.00th=[10945], 99.50th=[11338], 99.90th=[11994], 99.95th=[47449], 00:37:03.030 | 99.99th=[51119] 00:37:03.030 bw ( KiB/s): min=39424, max=41472, per=35.83%, avg=40729.60, stdev=537.63, samples=20 00:37:03.030 iops : min= 308, max= 324, avg=318.20, stdev= 4.20, samples=20 00:37:03.030 lat (msec) : 10=83.10%, 20=16.83%, 50=0.03%, 100=0.03% 00:37:03.030 cpu : usr=95.92%, sys=3.76%, 
ctx=27, majf=0, minf=91 00:37:03.030 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.030 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.030 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:03.030 filename0: (groupid=0, jobs=1): err= 0: pid=645537: Thu Dec 5 20:56:54 2024 00:37:03.030 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(365MiB/10045msec) 00:37:03.030 slat (nsec): min=5992, max=45751, avg=16781.98, stdev=7005.07 00:37:03.030 clat (usec): min=7517, max=47505, avg=10283.86, stdev=1153.27 00:37:03.030 lat (usec): min=7534, max=47525, avg=10300.65, stdev=1153.19 00:37:03.030 clat percentiles (usec): 00:37:03.030 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:37:03.030 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:37:03.030 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:37:03.030 | 99.00th=[11863], 99.50th=[12125], 99.90th=[12780], 99.95th=[44827], 00:37:03.030 | 99.99th=[47449] 00:37:03.030 bw ( KiB/s): min=36096, max=38144, per=32.87%, avg=37363.20, stdev=473.32, samples=20 00:37:03.030 iops : min= 282, max= 298, avg=291.90, stdev= 3.70, samples=20 00:37:03.030 lat (msec) : 10=34.37%, 20=65.56%, 50=0.07% 00:37:03.030 cpu : usr=96.27%, sys=3.42%, ctx=16, majf=0, minf=83 00:37:03.030 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.030 issued rwts: total=2921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.030 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:03.030 filename0: (groupid=0, jobs=1): err= 0: pid=645538: Thu Dec 5 20:56:54 2024 
00:37:03.030 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(352MiB/10003msec) 00:37:03.030 slat (nsec): min=5868, max=63404, avg=19044.58, stdev=5338.50 00:37:03.030 clat (usec): min=5581, max=13243, avg=10631.43, stdev=723.72 00:37:03.030 lat (usec): min=5599, max=13266, avg=10650.47, stdev=724.05 00:37:03.030 clat percentiles (usec): 00:37:03.030 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:37:03.030 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:37:03.030 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:37:03.030 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13042], 99.95th=[13173], 00:37:03.030 | 99.99th=[13304] 00:37:03.030 bw ( KiB/s): min=35328, max=36864, per=31.67%, avg=36001.68, stdev=428.46, samples=19 00:37:03.030 iops : min= 276, max= 288, avg=281.26, stdev= 3.35, samples=19 00:37:03.030 lat (msec) : 10=17.18%, 20=82.82% 00:37:03.030 cpu : usr=96.21%, sys=3.46%, ctx=14, majf=0, minf=118 00:37:03.030 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.030 issued rwts: total=2817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.030 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:03.030 00:37:03.030 Run status group 0 (all jobs): 00:37:03.030 READ: bw=111MiB/s (116MB/s), 35.2MiB/s-39.6MiB/s (36.9MB/s-41.5MB/s), io=1115MiB (1169MB), run=10003-10047msec 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local 
sub_id=0 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.030 00:37:03.030 real 0m11.232s 00:37:03.030 user 0m38.450s 00:37:03.030 sys 0m1.380s 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.030 20:56:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.030 ************************************ 00:37:03.030 END TEST fio_dif_digest 00:37:03.030 ************************************ 00:37:03.030 20:56:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:03.030 20:56:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:03.030 rmmod nvme_tcp 00:37:03.030 rmmod nvme_fabrics 00:37:03.030 rmmod nvme_keyring 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 636241 ']' 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 636241 00:37:03.030 20:56:55 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 636241 ']' 00:37:03.030 20:56:55 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 636241 00:37:03.030 20:56:55 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:03.030 20:56:55 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:03.030 20:56:55 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 636241 00:37:03.030 20:56:55 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:03.030 20:56:55 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:03.030 20:56:55 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 636241' 00:37:03.030 killing process with pid 636241 00:37:03.030 20:56:55 nvmf_dif -- common/autotest_common.sh@973 -- # kill 636241 00:37:03.030 20:56:55 nvmf_dif -- common/autotest_common.sh@978 -- # wait 636241 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:03.030 20:56:55 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:04.935 Waiting for block devices as requested 00:37:04.935 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:37:04.935 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:04.935 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:05.194 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:05.194 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:05.194 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:05.453 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:05.453 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:05.453 0000:00:04.0 (8086 
2021): vfio-pci -> ioatdma 00:37:05.453 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:05.711 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:05.711 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:05.711 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:05.971 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:05.971 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:05.971 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:05.971 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:06.229 20:56:59 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:06.229 20:56:59 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:06.229 20:56:59 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:06.229 20:56:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:06.229 20:56:59 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:06.229 20:56:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:06.229 20:56:59 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:06.229 20:56:59 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:06.229 20:56:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.229 20:56:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:06.229 20:56:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.134 20:57:01 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:08.393 00:37:08.393 real 1m15.022s 00:37:08.393 user 7m25.520s 00:37:08.393 sys 0m20.351s 00:37:08.393 20:57:01 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:08.393 20:57:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:08.393 ************************************ 00:37:08.393 END TEST nvmf_dif 00:37:08.393 ************************************ 00:37:08.393 20:57:01 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:08.393 20:57:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:08.393 20:57:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:08.393 20:57:01 -- common/autotest_common.sh@10 -- # set +x 00:37:08.393 ************************************ 00:37:08.393 START TEST nvmf_abort_qd_sizes 00:37:08.393 ************************************ 00:37:08.393 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:08.393 * Looking for test storage... 00:37:08.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:08.393 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:08.393 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:37:08.393 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:08.393 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:08.393 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:08.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.394 --rc genhtml_branch_coverage=1 00:37:08.394 --rc genhtml_function_coverage=1 00:37:08.394 --rc 
genhtml_legend=1 00:37:08.394 --rc geninfo_all_blocks=1 00:37:08.394 --rc geninfo_unexecuted_blocks=1 00:37:08.394 00:37:08.394 ' 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:08.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.394 --rc genhtml_branch_coverage=1 00:37:08.394 --rc genhtml_function_coverage=1 00:37:08.394 --rc genhtml_legend=1 00:37:08.394 --rc geninfo_all_blocks=1 00:37:08.394 --rc geninfo_unexecuted_blocks=1 00:37:08.394 00:37:08.394 ' 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:08.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.394 --rc genhtml_branch_coverage=1 00:37:08.394 --rc genhtml_function_coverage=1 00:37:08.394 --rc genhtml_legend=1 00:37:08.394 --rc geninfo_all_blocks=1 00:37:08.394 --rc geninfo_unexecuted_blocks=1 00:37:08.394 00:37:08.394 ' 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:08.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.394 --rc genhtml_branch_coverage=1 00:37:08.394 --rc genhtml_function_coverage=1 00:37:08.394 --rc genhtml_legend=1 00:37:08.394 --rc geninfo_all_blocks=1 00:37:08.394 --rc geninfo_unexecuted_blocks=1 00:37:08.394 00:37:08.394 ' 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:08.394 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:08.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:37:08.653 20:57:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:15.216 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:15.216 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:15.216 Found net devices under 0000:af:00.0: cvl_0_0 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:15.216 Found net devices under 0000:af:00.1: cvl_0_1 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:15.216 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:15.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:15.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:37:15.217 00:37:15.217 --- 10.0.0.2 ping statistics --- 00:37:15.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.217 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:15.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:15.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:37:15.217 00:37:15.217 --- 10.0.0.1 ping statistics --- 00:37:15.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.217 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:15.217 20:57:07 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:17.118 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:17.118 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:17.118 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:17.118 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:17.375 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:17.375 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:17.375 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:17.375 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:17.375 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:17.375 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:17.375 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:17.375 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:17.375 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:17.375 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:17.375 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:37:17.375 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:18.306 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=654404 00:37:18.306 20:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 654404 00:37:18.307 20:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:18.307 20:57:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 654404 ']' 00:37:18.307 20:57:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.307 20:57:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:18.307 20:57:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:18.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:18.307 20:57:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:18.307 20:57:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:18.564 [2024-12-05 20:57:11.786321] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:37:18.564 [2024-12-05 20:57:11.786366] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:18.564 [2024-12-05 20:57:11.865479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:18.564 [2024-12-05 20:57:11.907582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:18.564 [2024-12-05 20:57:11.907616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:18.564 [2024-12-05 20:57:11.907623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:18.564 [2024-12-05 20:57:11.907629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:18.564 [2024-12-05 20:57:11.907633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
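The namespace setup traced above (nvmf_tcp_init in nvmf/common.sh) boils down to a short command sequence: move one port of the two-port NIC into a private network namespace, address both sides, open the NVMe/TCP port in the firewall, and ping-check the link before starting nvmf_tgt. The sketch below reconstructs those steps using the interface names (cvl_0_0/cvl_0_1) and addresses from this run; it is an illustration of the pattern, not the actual script. It defaults to printing the commands (DRY_RUN=1), since applying them requires root:

```shell
#!/usr/bin/env bash
# Sketch of the netns topology built in the trace above.
# DRY_RUN=1 (default) prints each command; DRY_RUN=0 executes it (root required).
set -euo pipefail

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # moved into the namespace; hosts the target IP
INITIATOR_IF=cvl_0_1     # stays in the root namespace
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP (port 4420) in on the initiator-facing interface.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity check: each side must reach the other before nvmf_tgt starts.
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

With the namespace in place, the target is launched inside it (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`, as in the trace), so the initiator in the root namespace exercises a real TCP path over the physical NIC ports rather than loopback.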
00:37:18.564 [2024-12-05 20:57:11.909206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.564 [2024-12-05 20:57:11.909321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:18.564 [2024-12-05 20:57:11.909438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.564 [2024-12-05 20:57:11.909438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:86:00.0 ]] 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 
00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:86:00.0 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:19.495 20:57:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0 00:37:19.496 20:57:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:19.496 20:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:19.496 20:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:19.496 20:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:19.496 ************************************ 00:37:19.496 START TEST spdk_target_abort 00:37:19.496 ************************************ 00:37:19.496 20:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:19.496 20:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:19.496 20:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:37:19.496 20:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.496 20:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.775 spdk_targetn1 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.775 [2024-12-05 20:57:15.519066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.775 [2024-12-05 20:57:15.563383] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:22.775 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.776 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:22.776 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.776 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:22.776 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:22.776 20:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:26.053 Initializing NVMe Controllers 00:37:26.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:26.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:26.053 Initialization complete. Launching workers. 
00:37:26.053 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15287, failed: 0 00:37:26.053 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1300, failed to submit 13987 00:37:26.053 success 706, unsuccessful 594, failed 0 00:37:26.053 20:57:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:26.053 20:57:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:29.333 Initializing NVMe Controllers 00:37:29.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:29.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:29.333 Initialization complete. Launching workers. 00:37:29.333 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8922, failed: 0 00:37:29.333 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1258, failed to submit 7664 00:37:29.333 success 312, unsuccessful 946, failed 0 00:37:29.333 20:57:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:29.333 20:57:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:32.604 Initializing NVMe Controllers 00:37:32.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:32.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:32.604 Initialization complete. Launching workers. 
00:37:32.604 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41010, failed: 0 00:37:32.604 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2840, failed to submit 38170 00:37:32.604 success 594, unsuccessful 2246, failed 0 00:37:32.604 20:57:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:32.604 20:57:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.604 20:57:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:32.604 20:57:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.604 20:57:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:32.604 20:57:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.604 20:57:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 654404 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 654404 ']' 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 654404 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 654404 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 654404' 00:37:33.536 killing process with pid 654404 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 654404 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 654404 00:37:33.536 00:37:33.536 real 0m14.264s 00:37:33.536 user 0m56.633s 00:37:33.536 sys 0m2.673s 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:33.536 20:57:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.536 ************************************ 00:37:33.536 END TEST spdk_target_abort 00:37:33.536 ************************************ 00:37:33.794 20:57:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:33.794 20:57:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:33.794 20:57:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:33.794 20:57:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:33.794 ************************************ 00:37:33.794 START TEST kernel_target_abort 00:37:33.794 ************************************ 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:33.795 20:57:27 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:33.795 20:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:36.333 Waiting for block devices as requested 00:37:36.592 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:37:36.592 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:36.592 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:36.851 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:36.851 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:36.851 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:37.111 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:37.111 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:37.111 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:37.111 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:37.369 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:37.369 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:37.369 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:37.627 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:37.627 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:37.627 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:37.627 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:37.886 No valid GPT data, bailing 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:37:37.886 00:37:37.886 Discovery Log Number of Records 2, Generation counter 2 00:37:37.886 =====Discovery Log Entry 0====== 00:37:37.886 trtype: tcp 00:37:37.886 adrfam: ipv4 00:37:37.886 subtype: current discovery subsystem 00:37:37.886 treq: not specified, sq flow control disable supported 00:37:37.886 portid: 1 00:37:37.886 trsvcid: 4420 00:37:37.886 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:37.886 traddr: 10.0.0.1 00:37:37.886 eflags: none 00:37:37.886 sectype: none 00:37:37.886 =====Discovery Log Entry 1====== 00:37:37.886 trtype: tcp 00:37:37.886 adrfam: ipv4 00:37:37.886 subtype: nvme subsystem 00:37:37.886 treq: not specified, sq flow control disable supported 00:37:37.886 portid: 1 00:37:37.886 trsvcid: 4420 00:37:37.886 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:37.886 traddr: 10.0.0.1 00:37:37.886 eflags: none 00:37:37.886 sectype: none 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:37.886 20:57:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:41.172 Initializing NVMe Controllers 00:37:41.172 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:41.172 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:41.172 Initialization complete. Launching workers. 
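The configfs writes traced above (nvmf/common.sh@696-705) stand up an in-kernel NVMe/TCP target: point a namespace at a block device, enable it, configure the port address, then export the subsystem through the port. A hedged reconstruction follows; the function name and parameterized configfs root are illustrative additions, while the attribute file names (`device_path`, `enable`, `attr_allow_any_host`, `addr_*`) come from the standard nvmet configfs layout:

```shell
# Sketch of the kernel-target setup traced above (not the exact common.sh code).
# $1: configfs root (normally /sys/kernel/config/nvmet), $2: subsystem NQN,
# $3: backing block device, $4: traddr, $5: trsvcid
setup_kernel_target() {
    local cfg=$1 nqn=$2 dev=$3 addr=$4 svc=$5
    # On real configfs these directories are created via mkdir in configfs itself.
    mkdir -p "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1/subsystems"

    echo "$dev" > "$cfg/subsystems/$nqn/namespaces/1/device_path"
    echo 1      > "$cfg/subsystems/$nqn/attr_allow_any_host"
    echo 1      > "$cfg/subsystems/$nqn/namespaces/1/enable"

    echo "$addr" > "$cfg/ports/1/addr_traddr"
    echo tcp     > "$cfg/ports/1/addr_trtype"
    echo "$svc"  > "$cfg/ports/1/addr_trsvcid"
    echo ipv4    > "$cfg/ports/1/addr_adrfam"

    # Exporting the subsystem through the port makes it visible to discovery,
    # which is why `nvme discover` in the trace then reports two log entries.
    ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/$nqn"
}
```

Run against a scratch directory this just builds the same file tree the trace echoes into; on a live system the same writes under `/sys/kernel/config/nvmet` require the `nvmet` and `nvmet_tcp` modules.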
00:37:41.172 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84674, failed: 0 00:37:41.172 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 84674, failed to submit 0 00:37:41.172 success 0, unsuccessful 84674, failed 0 00:37:41.172 20:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:41.172 20:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:44.599 Initializing NVMe Controllers 00:37:44.599 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:44.599 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:44.599 Initialization complete. Launching workers. 00:37:44.599 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 152829, failed: 0 00:37:44.599 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29170, failed to submit 123659 00:37:44.599 success 0, unsuccessful 29170, failed 0 00:37:44.599 20:57:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:44.599 20:57:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:47.884 Initializing NVMe Controllers 00:37:47.884 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:47.884 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:47.884 Initialization complete. Launching workers. 
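The `rabort` steps traced above (abort_qd_sizes.sh@17-34) assemble the `-r` transport string one `name:value` pair at a time, then run the abort example once per queue depth in `qds=(4 24 64)`. A sketch of that assembly, with the actual invocation left as a comment since the binary path is machine-specific:

```shell
# Sketch of the rabort loop traced above; function names are illustrative.
build_target() {
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    local target= r
    # Mirrors abort_qd_sizes.sh@28-29: append one "name:value" pair per field.
    for r in trtype adrfam traddr trsvcid subnqn; do
        target+="${target:+ }$r:${!r}"
    done
    echo "$target"
}

run_sweep() {
    local qds=(4 24 64) qd
    for qd in "${qds[@]}"; do
        # Real run: build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$(build_target "$@")"
        echo "abort -q $qd -r '$(build_target "$@")'"
    done
}
```

Note how the trace's incremental `target='trtype:tcp adrfam:IPv4 ...'` lines correspond to the `${!r}` indirect expansion appending each field in turn.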
00:37:47.884 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 135954, failed: 0 00:37:47.884 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34038, failed to submit 101916 00:37:47.884 success 0, unsuccessful 34038, failed 0 00:37:47.884 20:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:47.884 20:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:47.884 20:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:47.884 20:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:47.884 20:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:47.884 20:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:47.884 20:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:47.884 20:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:47.884 20:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:47.884 20:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:50.421 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:50.421 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:51.356 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:37:51.356 00:37:51.356 real 0m17.615s 00:37:51.356 user 0m8.491s 00:37:51.356 sys 0m5.397s 00:37:51.356 20:57:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.356 20:57:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.356 ************************************ 00:37:51.356 END TEST kernel_target_abort 00:37:51.356 ************************************ 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:51.356 rmmod nvme_tcp 00:37:51.356 rmmod nvme_fabrics 00:37:51.356 rmmod nvme_keyring 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 654404 ']' 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 654404 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 654404 ']' 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 654404 00:37:51.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (654404) - No such process 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 654404 is not found' 00:37:51.356 Process with pid 654404 is not found 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:51.356 20:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:54.637 Waiting for block devices as requested 00:37:54.637 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:37:54.637 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:54.637 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:54.637 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:54.637 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:54.637 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:54.637 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:54.895 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:54.895 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:54.895 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:55.153 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:55.153 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:55.153 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:55.153 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:55.412 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:37:55.412 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:55.412 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:55.671 20:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:55.671 20:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:55.671 20:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:55.671 20:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:55.671 20:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:55.671 20:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:55.671 20:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:55.671 20:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:55.671 20:57:48 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:55.671 20:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:55.671 20:57:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.576 20:57:50 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:57.576 00:37:57.576 real 0m49.300s 00:37:57.576 user 1m9.690s 00:37:57.576 sys 0m16.812s 00:37:57.576 20:57:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:57.576 20:57:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:57.576 ************************************ 00:37:57.576 END TEST nvmf_abort_qd_sizes 00:37:57.576 ************************************ 00:37:57.576 20:57:50 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:57.576 20:57:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:57.576 20:57:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 
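The clean_kernel_target steps traced earlier (nvmf/common.sh@712-719) tear the target down in reverse order: the subsystem must be unlinked from the port before its directories can be removed, since configfs refuses to delete a subsystem that is still exported. A hedged sketch; the two `rmdir` calls marked "implicit" are needed only on a plain filesystem, because real configfs removes its built-in subdirectories automatically:

```shell
# Sketch of clean_kernel_target as traced above (illustrative reconstruction).
clean_kernel_target() {
    local cfg=$1 nqn=$2
    [ -e "$cfg/subsystems/$nqn" ] || return 0   # nothing to clean (common.sh@712)
    rm -f  "$cfg/ports/1/subsystems/$nqn"       # unexport from the port first
    rmdir  "$cfg/subsystems/$nqn/namespaces/1"
    rmdir  "$cfg/subsystems/$nqn/namespaces"    # implicit on real configfs
    rmdir  "$cfg/subsystems/$nqn"
    rmdir  "$cfg/ports/1/subsystems"            # implicit on real configfs
    rmdir  "$cfg/ports/1"
}
```

After this the trace unloads `nvmet_tcp` and `nvmet` with `modprobe -r` and hands the NVMe devices back to their normal drivers via setup.sh.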
00:37:57.576 20:57:50 -- common/autotest_common.sh@10 -- # set +x 00:37:57.836 ************************************ 00:37:57.836 START TEST keyring_file 00:37:57.836 ************************************ 00:37:57.836 20:57:51 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:57.836 * Looking for test storage... 00:37:57.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:57.836 20:57:51 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:57.836 20:57:51 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:37:57.836 20:57:51 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:57.836 20:57:51 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:57.836 20:57:51 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:57.836 20:57:51 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:57.836 20:57:51 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:57.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.836 --rc genhtml_branch_coverage=1 00:37:57.836 --rc genhtml_function_coverage=1 00:37:57.836 --rc genhtml_legend=1 00:37:57.836 --rc geninfo_all_blocks=1 00:37:57.836 --rc geninfo_unexecuted_blocks=1 00:37:57.836 00:37:57.836 ' 00:37:57.836 20:57:51 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:57.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.836 --rc genhtml_branch_coverage=1 00:37:57.836 --rc genhtml_function_coverage=1 00:37:57.836 --rc genhtml_legend=1 00:37:57.836 --rc geninfo_all_blocks=1 00:37:57.836 --rc 
geninfo_unexecuted_blocks=1 00:37:57.836 00:37:57.836 ' 00:37:57.836 20:57:51 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:57.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.836 --rc genhtml_branch_coverage=1 00:37:57.836 --rc genhtml_function_coverage=1 00:37:57.836 --rc genhtml_legend=1 00:37:57.836 --rc geninfo_all_blocks=1 00:37:57.836 --rc geninfo_unexecuted_blocks=1 00:37:57.836 00:37:57.836 ' 00:37:57.836 20:57:51 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:57.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.836 --rc genhtml_branch_coverage=1 00:37:57.836 --rc genhtml_function_coverage=1 00:37:57.836 --rc genhtml_legend=1 00:37:57.836 --rc geninfo_all_blocks=1 00:37:57.836 --rc geninfo_unexecuted_blocks=1 00:37:57.836 00:37:57.836 ' 00:37:57.836 20:57:51 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:57.836 20:57:51 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:57.836 20:57:51 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:57.836 20:57:51 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:57.836 20:57:51 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.836 20:57:51 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.836 20:57:51 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.836 20:57:51 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:57.836 20:57:51 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
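The lcov gate traced earlier (`lt 1.15 2` via scripts/common.sh cmp_versions@333-368) splits dotted versions on `.` and `-` and compares field by field, padding missing fields with zero. A hedged sketch under an illustrative name:

```shell
# Sketch of the cmp_versions logic traced above; ver_lt is a hypothetical name.
# Returns 0 (true) when version $1 sorts strictly before version $2.
ver_lt() {
    local -a v1 v2
    local IFS=.- i n
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields compare as 0, so "2" behaves like "2.0".
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
```

This is why `1.15 < 2` holds in the trace: the first field decides before the second is ever compared.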
00:37:57.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:57.836 20:57:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:57.836 20:57:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:57.836 20:57:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:57.836 20:57:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:57.836 20:57:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:57.836 20:57:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:57.836 20:57:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:57.836 20:57:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:57.836 20:57:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:57.836 20:57:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:57.836 20:57:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:57.836 20:57:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:57.836 20:57:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8Dr1yCRDVW 00:37:57.836 20:57:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:57.836 20:57:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:57.837 20:57:51 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:37:57.837 20:57:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:57.837 20:57:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:58.095 20:57:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8Dr1yCRDVW 00:37:58.095 20:57:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8Dr1yCRDVW 00:37:58.095 20:57:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.8Dr1yCRDVW 00:37:58.095 20:57:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:58.095 20:57:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:58.095 20:57:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:58.095 20:57:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:58.095 20:57:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:58.095 20:57:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:58.095 20:57:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9WYQt1jvll 00:37:58.095 20:57:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:58.095 20:57:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:58.096 20:57:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:58.096 20:57:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:58.096 20:57:51 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:58.096 20:57:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:58.096 20:57:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:58.096 20:57:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9WYQt1jvll 00:37:58.096 20:57:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9WYQt1jvll 00:37:58.096 20:57:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.9WYQt1jvll 
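The prep_key traces above (keyring/common.sh@15-23 calling format_interchange_psk, nvmf/common.sh@730-733) turn the configured key string into the NVMe/TCP TLS PSK interchange form `NVMeTLSkey-1:<hash>:<base64>:`, where the base64 payload is the key bytes followed by a little-endian CRC-32, and a hash byte of `00` means the PSK is used as configured. A hedged reconstruction in the same shell-plus-embedded-python style the script uses:

```shell
# Sketch of format_interchange_psk as traced above (illustrative reconstruction).
# $1: key string (used as raw bytes), $2: hash indicator (0 = no hash)
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 of the key, little-endian
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), b64))
EOF
}
```

The result is what lands in the `mktemp` files (`chmod 0600`) that the trace then registers with `keyring_file_add_key`.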
00:37:58.096 20:57:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=663812 00:37:58.096 20:57:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 663812 00:37:58.096 20:57:51 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:58.096 20:57:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 663812 ']' 00:37:58.096 20:57:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.096 20:57:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.096 20:57:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.096 20:57:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.096 20:57:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:58.096 [2024-12-05 20:57:51.382669] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
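`waitforlisten` above blocks until the freshly launched spdk_tgt answers on `/var/tmp/spdk.sock`, retrying a probe with a bounded budget (`max_retries=100` in the trace). A generic sketch of that pattern under a hypothetical name, probe command supplied by the caller:

```shell
# Sketch of the bounded-retry wait used by waitforlisten; name and shape are
# illustrative, not the autotest_common.sh implementation.
# $1: maximum attempts; remaining args: probe command to retry
waitforcond() {
    local tries=$1
    shift
    while (( tries-- > 0 )); do
        "$@" && return 0   # probe succeeded: target is up
        sleep 0.1
    done
    return 1               # budget exhausted
}
```

In the real script the probe is an RPC round-trip against the socket; here any command works, e.g. `waitforcond 100 test -S /var/tmp/spdk.sock`.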
00:37:58.096 [2024-12-05 20:57:51.382717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663812 ] 00:37:58.096 [2024-12-05 20:57:51.451829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.096 [2024-12-05 20:57:51.490920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:59.032 20:57:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:59.032 [2024-12-05 20:57:52.186135] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.032 null0 00:37:59.032 [2024-12-05 20:57:52.218182] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:59.032 [2024-12-05 20:57:52.218440] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.032 20:57:52 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
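The `NOT rpc_cmd nvmf_subsystem_add_listener ...` sequence traced here (autotest_common.sh@652-679) is a negative test: the listener already exists, so the RPC is expected to fail with "Listener already exists", and the wrapper inverts the exit status so the test step passes exactly when the command is rejected. A minimal sketch of that inversion:

```shell
# Sketch of the NOT wrapper traced above (simplified; the real helper also
# validates that the wrapped name is executable via valid_exec_arg).
NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only when the wrapped command failed.
    (( es != 0 ))
}
```

The trace's `es=1` after the JSON-RPC "Invalid parameters" response is this inversion in action.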
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:37:59.032 [2024-12-05 20:57:52.250260] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:37:59.032 request:
00:37:59.032 {
00:37:59.032 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:37:59.032 "secure_channel": false,
00:37:59.032 "listen_address": {
00:37:59.032 "trtype": "tcp",
00:37:59.032 "traddr": "127.0.0.1",
00:37:59.032 "trsvcid": "4420"
00:37:59.032 },
00:37:59.032 "method": "nvmf_subsystem_add_listener",
00:37:59.032 "req_id": 1
00:37:59.032 }
00:37:59.032 Got JSON-RPC error response
00:37:59.032 response:
00:37:59.032 {
00:37:59.032 "code": -32602,
00:37:59.032 "message": "Invalid parameters"
00:37:59.032 }
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:37:59.032 20:57:52 keyring_file -- keyring/file.sh@47 -- # bperfpid=663828
00:37:59.032 20:57:52 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:37:59.032 20:57:52 keyring_file -- keyring/file.sh@49 -- # waitforlisten 663828 /var/tmp/bperf.sock
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 663828 ']'
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:59.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:59.032 20:57:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:59.033 20:57:52 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:37:59.033 [2024-12-05 20:57:52.301166] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:37:59.033 [2024-12-05 20:57:52.301211] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663828 ]
00:37:59.033 [2024-12-05 20:57:52.373628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:59.033 [2024-12-05 20:57:52.411127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:59.292 20:57:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:59.292 20:57:52 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:37:59.292 20:57:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8Dr1yCRDVW
00:37:59.292 20:57:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8Dr1yCRDVW
00:37:59.292 20:57:52 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9WYQt1jvll
00:37:59.292 20:57:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9WYQt1jvll
00:37:59.551 20:57:52 keyring_file -- keyring/file.sh@52 -- # get_key key0
00:37:59.551 20:57:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path
00:37:59.551 20:57:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:59.551 20:57:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:59.551 20:57:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:59.809 20:57:53 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.8Dr1yCRDVW == \/\t\m\p\/\t\m\p\.\8\D\r\1\y\C\R\D\V\W ]]
00:37:59.809 20:57:53 keyring_file -- keyring/file.sh@53 -- # get_key key1
00:37:59.809 20:57:53 keyring_file -- keyring/file.sh@53 -- # jq -r .path
00:37:59.809 20:57:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:59.809 20:57:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:59.809 20:57:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:59.809 20:57:53 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.9WYQt1jvll == \/\t\m\p\/\t\m\p\.\9\W\Y\Q\t\1\j\v\l\l ]]
00:37:59.809 20:57:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0
00:37:59.809 20:57:53 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:59.809 20:57:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:59.809 20:57:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:59.809 20:57:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:59.809 20:57:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:00.067 20:57:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 ))
00:38:00.067 20:57:53 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1
00:38:00.067 20:57:53 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:38:00.067 20:57:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:00.067 20:57:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:00.067 20:57:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:38:00.067 20:57:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:00.326 20:57:53 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 ))
00:38:00.326 20:57:53 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:38:00.326 20:57:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:38:00.326 [2024-12-05 20:57:53.741530] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:38:00.585 nvme0n1
00:38:00.585 20:57:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0
00:38:00.585 20:57:53 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:38:00.585 20:57:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:00.585 20:57:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:00.585 20:57:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:38:00.585 20:57:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:00.585 20:57:54 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 ))
00:38:00.585 20:57:54 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1
00:38:00.585 20:57:54 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:38:00.585 20:57:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:00.585 20:57:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:00.585 20:57:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:00.585 20:57:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:38:00.843 20:57:54 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 ))
00:38:00.843 20:57:54 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:01.102 Running I/O for 1 seconds...
00:38:02.038 20901.00 IOPS, 81.64 MiB/s
00:38:02.038 Latency(us)
00:38:02.038 [2024-12-05T19:57:55.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:02.038 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:38:02.038 nvme0n1 : 1.00 20944.25 81.81 0.00 0.00 6100.35 2532.07 14894.55
00:38:02.038 [2024-12-05T19:57:55.479Z] ===================================================================================================================
00:38:02.038 [2024-12-05T19:57:55.479Z] Total : 20944.25 81.81 0.00 0.00 6100.35 2532.07 14894.55
00:38:02.038 {
00:38:02.038 "results": [
00:38:02.038 {
00:38:02.038 "job": "nvme0n1",
00:38:02.038 "core_mask": "0x2",
00:38:02.038 "workload": "randrw",
00:38:02.038 "percentage": 50,
00:38:02.038 "status": "finished",
00:38:02.038 "queue_depth": 128,
00:38:02.038 "io_size": 4096,
00:38:02.038 "runtime": 1.004142,
00:38:02.038 "iops": 20944.248920969345,
00:38:02.038 "mibps": 81.8134723475365,
00:38:02.038 "io_failed": 0,
00:38:02.038 "io_timeout": 0,
00:38:02.038 "avg_latency_us": 6100.345051158247,
00:38:02.038 "min_latency_us": 2532.072727272727,
00:38:02.038 "max_latency_us": 14894.545454545454
00:38:02.038 }
00:38:02.038 ],
00:38:02.038 "core_count": 1
00:38:02.038 }
00:38:02.038 20:57:55 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:38:02.038 20:57:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:38:02.296 20:57:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:38:02.296 20:57:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:02.296 20:57:55 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:38:02.296 20:57:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:02.296 20:57:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:38:02.296 20:57:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:02.296 20:57:55 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:38:02.296 20:57:55 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:38:02.296 20:57:55 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:38:02.296 20:57:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:02.296 20:57:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:02.296 20:57:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:38:02.296 20:57:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:02.555 20:57:55 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:38:02.555 20:57:55 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:38:02.555 20:57:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:38:02.555 20:57:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:38:02.555 20:57:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:38:02.555 20:57:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:02.555 20:57:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:38:02.555 20:57:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:02.555 20:57:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:38:02.555 20:57:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:38:02.813 [2024-12-05 20:57:56.040161] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:38:02.813 [2024-12-05 20:57:56.040879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a62330 (107): Transport endpoint is not connected
00:38:02.813 [2024-12-05 20:57:56.041874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a62330 (9): Bad file descriptor
00:38:02.813 [2024-12-05 20:57:56.042876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:38:02.813 [2024-12-05 20:57:56.042884] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:38:02.813 [2024-12-05 20:57:56.042891] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:38:02.813 [2024-12-05 20:57:56.042899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:38:02.813 request:
00:38:02.813 {
00:38:02.813 "name": "nvme0",
00:38:02.813 "trtype": "tcp",
00:38:02.813 "traddr": "127.0.0.1",
00:38:02.813 "adrfam": "ipv4",
00:38:02.813 "trsvcid": "4420",
00:38:02.813 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:02.813 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:02.813 "prchk_reftag": false,
00:38:02.813 "prchk_guard": false,
00:38:02.813 "hdgst": false,
00:38:02.813 "ddgst": false,
00:38:02.813 "psk": "key1",
00:38:02.813 "allow_unrecognized_csi": false,
00:38:02.813 "method": "bdev_nvme_attach_controller",
00:38:02.813 "req_id": 1
00:38:02.813 }
00:38:02.813 Got JSON-RPC error response
00:38:02.813 response:
00:38:02.813 {
00:38:02.813 "code": -5,
00:38:02.813 "message": "Input/output error"
00:38:02.813 }
00:38:02.813 20:57:56 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:38:02.813 20:57:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:38:02.813 20:57:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:38:02.813 20:57:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:38:02.813 20:57:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:38:02.813 20:57:56 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:38:02.813 20:57:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:02.813 20:57:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:02.813 20:57:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:38:02.813 20:57:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:02.813 20:57:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:38:02.813 20:57:56 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:38:02.813 20:57:56 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:38:02.813 20:57:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:02.813 20:57:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:02.813 20:57:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:38:02.813 20:57:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:03.071 20:57:56 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:38:03.071 20:57:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:38:03.071 20:57:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:38:03.329 20:57:56 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:38:03.329 20:57:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:38:03.588 20:57:56 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:38:03.588 20:57:56 keyring_file -- keyring/file.sh@78 -- # jq length
00:38:03.588 20:57:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:03.588 20:57:56 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:38:03.588 20:57:56 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.8Dr1yCRDVW
00:38:03.588 20:57:56 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.8Dr1yCRDVW
00:38:03.588 20:57:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:38:03.588 20:57:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.8Dr1yCRDVW
00:38:03.588 20:57:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:38:03.588 20:57:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:03.588 20:57:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:38:03.588 20:57:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:03.588 20:57:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8Dr1yCRDVW
00:38:03.588 20:57:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8Dr1yCRDVW
00:38:03.847 [2024-12-05 20:57:57.147787] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8Dr1yCRDVW': 0100660
00:38:03.847 [2024-12-05 20:57:57.147811] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:38:03.847 request:
00:38:03.847 {
00:38:03.847 "name": "key0",
00:38:03.847 "path": "/tmp/tmp.8Dr1yCRDVW",
00:38:03.847 "method": "keyring_file_add_key",
00:38:03.847 "req_id": 1
00:38:03.847 }
00:38:03.847 Got JSON-RPC error response
00:38:03.847 response:
00:38:03.847 {
00:38:03.847 "code": -1,
00:38:03.847 "message": "Operation not permitted"
00:38:03.847 }
00:38:03.847 20:57:57 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:38:03.847 20:57:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:38:03.847 20:57:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:38:03.847 20:57:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:38:03.847 20:57:57 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.8Dr1yCRDVW
00:38:03.847 20:57:57 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8Dr1yCRDVW
00:38:03.847 20:57:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8Dr1yCRDVW
00:38:04.106 20:57:57 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.8Dr1yCRDVW
00:38:04.106 20:57:57 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0
00:38:04.106 20:57:57 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:38:04.106 20:57:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:04.106 20:57:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:04.106 20:57:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:38:04.106 20:57:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:04.106 20:57:57 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 ))
00:38:04.106 20:57:57 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:38:04.106 20:57:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:38:04.106 20:57:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:38:04.106 20:57:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:38:04.106 20:57:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:04.106 20:57:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:38:04.106 20:57:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:04.106 20:57:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:38:04.106 20:57:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:38:04.365 [2024-12-05 20:57:57.693238] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.8Dr1yCRDVW': No such file or directory
00:38:04.365 [2024-12-05 20:57:57.693260] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:38:04.365 [2024-12-05 20:57:57.693275] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:38:04.365 [2024-12-05 20:57:57.693282] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device
00:38:04.365 [2024-12-05 20:57:57.693289] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:38:04.365 [2024-12-05 20:57:57.693294] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:38:04.365 request:
00:38:04.365 {
00:38:04.365 "name": "nvme0",
00:38:04.365 "trtype": "tcp",
00:38:04.365 "traddr": "127.0.0.1",
00:38:04.365 "adrfam": "ipv4",
00:38:04.365 "trsvcid": "4420",
00:38:04.365 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:04.365 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:04.365 "prchk_reftag": false,
00:38:04.365 "prchk_guard": false,
00:38:04.365 "hdgst": false,
00:38:04.365 "ddgst": false,
00:38:04.365 "psk": "key0",
00:38:04.365 "allow_unrecognized_csi": false,
00:38:04.365 "method": "bdev_nvme_attach_controller",
00:38:04.365 "req_id": 1
00:38:04.365 }
00:38:04.365 Got JSON-RPC error response
00:38:04.365 response:
00:38:04.365 {
00:38:04.365 "code": -19,
00:38:04.365 "message": "No such device"
00:38:04.365 }
00:38:04.365 20:57:57 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:38:04.365 20:57:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:38:04.365 20:57:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:38:04.365 20:57:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:38:04.365 20:57:57 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0
00:38:04.365 20:57:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:38:04.624 20:57:57 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:38:04.624 20:57:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:38:04.624 20:57:57 keyring_file -- keyring/common.sh@17 -- # name=key0
00:38:04.624 20:57:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:38:04.624 20:57:57 keyring_file -- keyring/common.sh@17 -- # digest=0
00:38:04.624 20:57:57 keyring_file -- keyring/common.sh@18 -- # mktemp
00:38:04.624 20:57:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LYjgPOt1HL
00:38:04.624 20:57:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:38:04.624 20:57:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:38:04.624 20:57:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:38:04.624 20:57:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:38:04.624 20:57:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:38:04.624 20:57:57 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:38:04.624 20:57:57 keyring_file -- nvmf/common.sh@733 -- # python -
00:38:04.624 20:57:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LYjgPOt1HL
00:38:04.624 20:57:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LYjgPOt1HL
00:38:04.624 20:57:57 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.LYjgPOt1HL
00:38:04.624 20:57:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LYjgPOt1HL
00:38:04.624 20:57:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LYjgPOt1HL
00:38:04.883 20:57:58 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:38:04.883 20:57:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:38:04.883 nvme0n1
00:38:04.883 20:57:58 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0
00:38:05.142 20:57:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:05.142 20:57:58 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:38:05.142 20:57:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:05.142 20:57:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:38:05.142 20:57:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:05.142 20:57:58 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 ))
00:38:05.142 20:57:58 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0
00:38:05.142 20:57:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:38:05.400 20:57:58 keyring_file -- keyring/file.sh@102 -- # get_key key0
00:38:05.400 20:57:58 keyring_file -- keyring/file.sh@102 -- # jq -r .removed
00:38:05.400 20:57:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:05.400 20:57:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:38:05.400 20:57:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:05.659 20:57:58 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]]
00:38:05.659 20:57:58 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0
00:38:05.659 20:57:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:05.659 20:57:58 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:38:05.659 20:57:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:05.659 20:57:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:38:05.659 20:57:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:05.659 20:57:59 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 ))
00:38:05.659 20:57:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:38:05.659 20:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:38:05.917 20:57:59 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys
00:38:05.917 20:57:59 keyring_file -- keyring/file.sh@105 -- # jq length
00:38:05.917 20:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:06.174 20:57:59 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 ))
00:38:06.174 20:57:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LYjgPOt1HL
00:38:06.174 20:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LYjgPOt1HL
00:38:06.174 20:57:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9WYQt1jvll
00:38:06.174 20:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9WYQt1jvll
00:38:06.432 20:57:59 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:38:06.432 20:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:38:06.690 nvme0n1
00:38:06.690 20:58:00 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config
00:38:06.690 20:58:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config
00:38:06.949 20:58:00 keyring_file -- keyring/file.sh@113 -- # config='{
00:38:06.949 "subsystems": [
00:38:06.949 {
00:38:06.949 "subsystem": "keyring",
00:38:06.949 "config": [
00:38:06.949 {
00:38:06.949 "method": "keyring_file_add_key",
00:38:06.949 "params": {
00:38:06.949 "name": "key0",
00:38:06.949 "path": "/tmp/tmp.LYjgPOt1HL"
00:38:06.949 }
00:38:06.949 },
00:38:06.949 {
00:38:06.949 "method": "keyring_file_add_key",
00:38:06.949 "params": {
00:38:06.949 "name": "key1",
00:38:06.949 "path": "/tmp/tmp.9WYQt1jvll"
00:38:06.949 }
00:38:06.949 }
00:38:06.949 ]
00:38:06.949 },
00:38:06.949 {
00:38:06.949 "subsystem": "iobuf",
00:38:06.949 "config": [
00:38:06.949 {
00:38:06.949 "method": "iobuf_set_options",
00:38:06.949 "params": {
00:38:06.949 "small_pool_count": 8192,
00:38:06.949 "large_pool_count": 1024,
00:38:06.949 "small_bufsize": 8192,
00:38:06.949 "large_bufsize": 135168,
00:38:06.949 "enable_numa": false
00:38:06.949 }
00:38:06.949 }
00:38:06.949 ]
00:38:06.949 },
00:38:06.949 {
00:38:06.949 "subsystem": "sock",
00:38:06.949 "config": [
00:38:06.949 {
00:38:06.949 "method": "sock_set_default_impl",
00:38:06.949 "params": {
00:38:06.949 "impl_name": "posix"
00:38:06.949 }
00:38:06.949 },
00:38:06.949 {
00:38:06.949 "method": "sock_impl_set_options",
00:38:06.949 "params": {
00:38:06.949 "impl_name": "ssl",
00:38:06.949 "recv_buf_size": 4096,
00:38:06.949 "send_buf_size": 4096,
00:38:06.949 "enable_recv_pipe": true,
00:38:06.949 "enable_quickack": false,
00:38:06.949 "enable_placement_id": 0,
00:38:06.949 "enable_zerocopy_send_server": true,
00:38:06.949 "enable_zerocopy_send_client": false,
00:38:06.949 "zerocopy_threshold": 0,
00:38:06.949 "tls_version": 0,
00:38:06.949 "enable_ktls": false
00:38:06.950 }
00:38:06.950 },
00:38:06.950 {
00:38:06.950 "method": "sock_impl_set_options",
00:38:06.950 "params": {
00:38:06.950 "impl_name": "posix",
00:38:06.950 "recv_buf_size": 2097152,
00:38:06.950 "send_buf_size": 2097152,
00:38:06.950 "enable_recv_pipe": true,
00:38:06.950 "enable_quickack": false,
00:38:06.950 "enable_placement_id": 0,
00:38:06.950 "enable_zerocopy_send_server": true,
00:38:06.950 "enable_zerocopy_send_client": false,
00:38:06.950 "zerocopy_threshold": 0,
00:38:06.950 "tls_version": 0,
00:38:06.950 "enable_ktls": false
00:38:06.950 }
00:38:06.950 }
00:38:06.950 ]
00:38:06.950 },
00:38:06.950 {
00:38:06.950 "subsystem": "vmd",
00:38:06.950 "config": []
00:38:06.950 },
00:38:06.950 {
00:38:06.950 "subsystem": "accel",
00:38:06.950 "config": [
00:38:06.950 {
00:38:06.950 "method": "accel_set_options",
00:38:06.950 "params": {
00:38:06.950 "small_cache_size": 128,
00:38:06.950 "large_cache_size": 16,
00:38:06.950 "task_count": 2048,
00:38:06.950 "sequence_count": 2048,
00:38:06.950 "buf_count": 2048
00:38:06.950 }
00:38:06.950 }
00:38:06.950 ]
00:38:06.950 },
00:38:06.950 {
00:38:06.950 "subsystem": "bdev",
00:38:06.950 "config": [
00:38:06.950 {
00:38:06.950 "method": "bdev_set_options",
00:38:06.950 "params": {
00:38:06.950 "bdev_io_pool_size": 65535,
00:38:06.950 "bdev_io_cache_size": 256,
00:38:06.950 "bdev_auto_examine": true,
00:38:06.950 "iobuf_small_cache_size": 128,
00:38:06.950 "iobuf_large_cache_size": 16
00:38:06.950 }
00:38:06.950 },
00:38:06.950 {
00:38:06.950 "method": "bdev_raid_set_options",
00:38:06.950 "params": {
00:38:06.950 "process_window_size_kb": 1024,
00:38:06.950 "process_max_bandwidth_mb_sec": 0
00:38:06.950 }
00:38:06.950 },
00:38:06.950 {
00:38:06.950 "method": "bdev_iscsi_set_options",
00:38:06.950 "params": {
00:38:06.950 "timeout_sec": 30
00:38:06.950 }
00:38:06.950 },
00:38:06.950 {
00:38:06.950 "method": "bdev_nvme_set_options",
00:38:06.950 "params": {
00:38:06.950 "action_on_timeout": "none",
00:38:06.950 "timeout_us": 0,
00:38:06.950 "timeout_admin_us": 0,
00:38:06.950 "keep_alive_timeout_ms": 10000,
00:38:06.950 "arbitration_burst": 0,
00:38:06.950 "low_priority_weight": 0,
00:38:06.950 "medium_priority_weight": 0,
00:38:06.950 "high_priority_weight": 0,
00:38:06.950 "nvme_adminq_poll_period_us": 10000,
00:38:06.950 "nvme_ioq_poll_period_us": 0,
00:38:06.950 "io_queue_requests": 512,
00:38:06.950 "delay_cmd_submit": true,
00:38:06.950 "transport_retry_count": 4,
00:38:06.950 "bdev_retry_count": 3,
00:38:06.950 "transport_ack_timeout": 0,
00:38:06.950 "ctrlr_loss_timeout_sec": 0,
00:38:06.950 "reconnect_delay_sec": 0,
00:38:06.950 "fast_io_fail_timeout_sec": 0,
00:38:06.950 "disable_auto_failback": false,
00:38:06.950 "generate_uuids": false,
00:38:06.950 "transport_tos": 0,
00:38:06.950 "nvme_error_stat": false,
00:38:06.950 "rdma_srq_size": 0,
00:38:06.950 "io_path_stat": false,
00:38:06.950 "allow_accel_sequence": false,
00:38:06.950 "rdma_max_cq_size": 0,
00:38:06.950 "rdma_cm_event_timeout_ms": 0,
00:38:06.950 "dhchap_digests": [
00:38:06.950 "sha256",
00:38:06.950 "sha384",
00:38:06.950 "sha512"
00:38:06.950 ],
00:38:06.950 "dhchap_dhgroups": [
00:38:06.950 "null",
00:38:06.950 "ffdhe2048",
00:38:06.950 "ffdhe3072",
00:38:06.950 "ffdhe4096",
00:38:06.950 "ffdhe6144",
00:38:06.950 "ffdhe8192"
00:38:06.950 ]
00:38:06.950 }
00:38:06.950 },
00:38:06.950 {
00:38:06.950 "method": "bdev_nvme_attach_controller",
00:38:06.950 "params": {
00:38:06.950 "name": "nvme0",
00:38:06.950 "trtype": "TCP",
00:38:06.950 "adrfam": "IPv4",
00:38:06.950 "traddr": "127.0.0.1",
00:38:06.950 "trsvcid": "4420",
00:38:06.950 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:06.950 "prchk_reftag": false,
00:38:06.950 "prchk_guard": false,
00:38:06.950 "ctrlr_loss_timeout_sec": 0,
00:38:06.950 "reconnect_delay_sec": 0,
00:38:06.950 "fast_io_fail_timeout_sec": 0,
00:38:06.950 "psk": "key0",
00:38:06.950 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:06.950 "hdgst": false,
00:38:06.950 "ddgst": false,
00:38:06.950 "multipath": "multipath"
00:38:06.950 }
00:38:06.950 },
00:38:06.950 {
00:38:06.950 "method": "bdev_nvme_set_hotplug",
00:38:06.950 "params": {
00:38:06.950 "period_us": 100000,
00:38:06.950 "enable": false
00:38:06.950 }
00:38:06.950 },
00:38:06.950 {
00:38:06.950 "method": "bdev_wait_for_examine"
00:38:06.950 }
00:38:06.950 ]
00:38:06.950 },
00:38:06.950 {
00:38:06.950 "subsystem": "nbd", 00:38:06.950 "config": [] 00:38:06.950 } 00:38:06.950 ] 00:38:06.950 }' 00:38:06.950 20:58:00 keyring_file -- keyring/file.sh@115 -- # killprocess 663828 00:38:06.950 20:58:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 663828 ']' 00:38:06.950 20:58:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 663828 00:38:06.950 20:58:00 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:06.950 20:58:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:06.950 20:58:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 663828 00:38:06.950 20:58:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:06.950 20:58:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:06.950 20:58:00 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 663828' 00:38:06.950 killing process with pid 663828 00:38:06.950 20:58:00 keyring_file -- common/autotest_common.sh@973 -- # kill 663828 00:38:06.950 Received shutdown signal, test time was about 1.000000 seconds 00:38:06.950 00:38:06.950 Latency(us) 00:38:06.950 [2024-12-05T19:58:00.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.950 [2024-12-05T19:58:00.391Z] =================================================================================================================== 00:38:06.950 [2024-12-05T19:58:00.391Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:06.950 20:58:00 keyring_file -- common/autotest_common.sh@978 -- # wait 663828 00:38:07.210 20:58:00 keyring_file -- keyring/file.sh@118 -- # bperfpid=665532 00:38:07.210 20:58:00 keyring_file -- keyring/file.sh@120 -- # waitforlisten 665532 /var/tmp/bperf.sock 00:38:07.210 20:58:00 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 665532 ']' 00:38:07.210 20:58:00 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:38:07.210 20:58:00 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:07.210 20:58:00 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:07.210 20:58:00 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:07.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:07.210 20:58:00 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:07.210 "subsystems": [ 00:38:07.210 { 00:38:07.210 "subsystem": "keyring", 00:38:07.210 "config": [ 00:38:07.210 { 00:38:07.210 "method": "keyring_file_add_key", 00:38:07.210 "params": { 00:38:07.210 "name": "key0", 00:38:07.210 "path": "/tmp/tmp.LYjgPOt1HL" 00:38:07.210 } 00:38:07.210 }, 00:38:07.210 { 00:38:07.210 "method": "keyring_file_add_key", 00:38:07.210 "params": { 00:38:07.210 "name": "key1", 00:38:07.210 "path": "/tmp/tmp.9WYQt1jvll" 00:38:07.210 } 00:38:07.210 } 00:38:07.210 ] 00:38:07.210 }, 00:38:07.210 { 00:38:07.210 "subsystem": "iobuf", 00:38:07.210 "config": [ 00:38:07.210 { 00:38:07.210 "method": "iobuf_set_options", 00:38:07.210 "params": { 00:38:07.210 "small_pool_count": 8192, 00:38:07.210 "large_pool_count": 1024, 00:38:07.210 "small_bufsize": 8192, 00:38:07.210 "large_bufsize": 135168, 00:38:07.210 "enable_numa": false 00:38:07.210 } 00:38:07.210 } 00:38:07.210 ] 00:38:07.210 }, 00:38:07.210 { 00:38:07.210 "subsystem": "sock", 00:38:07.210 "config": [ 00:38:07.210 { 00:38:07.210 "method": "sock_set_default_impl", 00:38:07.210 "params": { 00:38:07.210 "impl_name": "posix" 00:38:07.210 } 00:38:07.210 }, 00:38:07.210 { 00:38:07.210 "method": "sock_impl_set_options", 00:38:07.210 "params": { 00:38:07.210 "impl_name": "ssl", 00:38:07.210 "recv_buf_size": 4096, 00:38:07.210 
"send_buf_size": 4096, 00:38:07.210 "enable_recv_pipe": true, 00:38:07.210 "enable_quickack": false, 00:38:07.210 "enable_placement_id": 0, 00:38:07.210 "enable_zerocopy_send_server": true, 00:38:07.211 "enable_zerocopy_send_client": false, 00:38:07.211 "zerocopy_threshold": 0, 00:38:07.211 "tls_version": 0, 00:38:07.211 "enable_ktls": false 00:38:07.211 } 00:38:07.211 }, 00:38:07.211 { 00:38:07.211 "method": "sock_impl_set_options", 00:38:07.211 "params": { 00:38:07.211 "impl_name": "posix", 00:38:07.211 "recv_buf_size": 2097152, 00:38:07.211 "send_buf_size": 2097152, 00:38:07.211 "enable_recv_pipe": true, 00:38:07.211 "enable_quickack": false, 00:38:07.211 "enable_placement_id": 0, 00:38:07.211 "enable_zerocopy_send_server": true, 00:38:07.211 "enable_zerocopy_send_client": false, 00:38:07.211 "zerocopy_threshold": 0, 00:38:07.211 "tls_version": 0, 00:38:07.211 "enable_ktls": false 00:38:07.211 } 00:38:07.211 } 00:38:07.211 ] 00:38:07.211 }, 00:38:07.211 { 00:38:07.211 "subsystem": "vmd", 00:38:07.211 "config": [] 00:38:07.211 }, 00:38:07.211 { 00:38:07.211 "subsystem": "accel", 00:38:07.211 "config": [ 00:38:07.211 { 00:38:07.211 "method": "accel_set_options", 00:38:07.211 "params": { 00:38:07.211 "small_cache_size": 128, 00:38:07.211 "large_cache_size": 16, 00:38:07.211 "task_count": 2048, 00:38:07.211 "sequence_count": 2048, 00:38:07.211 "buf_count": 2048 00:38:07.211 } 00:38:07.211 } 00:38:07.211 ] 00:38:07.211 }, 00:38:07.211 { 00:38:07.211 "subsystem": "bdev", 00:38:07.211 "config": [ 00:38:07.211 { 00:38:07.211 "method": "bdev_set_options", 00:38:07.211 "params": { 00:38:07.211 "bdev_io_pool_size": 65535, 00:38:07.211 "bdev_io_cache_size": 256, 00:38:07.211 "bdev_auto_examine": true, 00:38:07.211 "iobuf_small_cache_size": 128, 00:38:07.211 "iobuf_large_cache_size": 16 00:38:07.211 } 00:38:07.211 }, 00:38:07.211 { 00:38:07.211 "method": "bdev_raid_set_options", 00:38:07.211 "params": { 00:38:07.211 "process_window_size_kb": 1024, 00:38:07.211 
"process_max_bandwidth_mb_sec": 0 00:38:07.211 } 00:38:07.211 }, 00:38:07.211 { 00:38:07.211 "method": "bdev_iscsi_set_options", 00:38:07.211 "params": { 00:38:07.211 "timeout_sec": 30 00:38:07.211 } 00:38:07.211 }, 00:38:07.211 { 00:38:07.211 "method": "bdev_nvme_set_options", 00:38:07.211 "params": { 00:38:07.211 "action_on_timeout": "none", 00:38:07.211 "timeout_us": 0, 00:38:07.211 "timeout_admin_us": 0, 00:38:07.211 "keep_alive_timeout_ms": 10000, 00:38:07.211 "arbitration_burst": 0, 00:38:07.211 "low_priority_weight": 0, 00:38:07.211 "medium_priority_weight": 0, 00:38:07.211 "high_priority_weight": 0, 00:38:07.211 "nvme_adminq_poll_period_us": 10000, 00:38:07.211 "nvme_ioq_poll_period_us": 0, 00:38:07.211 "io_queue_requests": 512, 00:38:07.211 "delay_cmd_submit": true, 00:38:07.211 "transport_retry_count": 4, 00:38:07.211 "bdev_retry_count": 3, 00:38:07.211 "transport_ack_timeout": 0, 00:38:07.211 "ctrlr_loss_timeout_sec": 0, 00:38:07.211 "reconnect_delay_sec": 0, 00:38:07.211 "fast_io_fail_timeout_sec": 0, 00:38:07.211 "disable_auto_failback": false, 00:38:07.211 "generate_uuids": false, 00:38:07.211 "transport_tos": 0, 00:38:07.211 "nvme_error_stat": false, 00:38:07.211 "rdma_srq_size": 0, 00:38:07.211 "io_path_stat": false, 00:38:07.211 "allow_accel_sequence": false, 00:38:07.211 "rdma_max_cq_size": 0, 00:38:07.211 "rdma_cm_event_timeout_ms": 0, 00:38:07.211 "dhchap_digests": [ 00:38:07.211 "sha256", 00:38:07.211 "sha384", 00:38:07.211 "sha512" 00:38:07.211 ], 00:38:07.211 "dhchap_dhgroups": [ 00:38:07.211 "null", 00:38:07.211 "ffdhe2048", 00:38:07.211 "ffdhe3072", 00:38:07.211 "ffdhe4096", 00:38:07.211 "ffdhe6144", 00:38:07.211 "ffdhe8192" 00:38:07.211 ] 00:38:07.211 } 00:38:07.211 }, 00:38:07.211 { 00:38:07.211 "method": "bdev_nvme_attach_controller", 00:38:07.211 "params": { 00:38:07.211 "name": "nvme0", 00:38:07.211 "trtype": "TCP", 00:38:07.211 "adrfam": "IPv4", 00:38:07.211 "traddr": "127.0.0.1", 00:38:07.211 "trsvcid": "4420", 00:38:07.211 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:38:07.211 "prchk_reftag": false, 00:38:07.211 "prchk_guard": false, 00:38:07.211 "ctrlr_loss_timeout_sec": 0, 00:38:07.211 "reconnect_delay_sec": 0, 00:38:07.211 "fast_io_fail_timeout_sec": 0, 00:38:07.211 "psk": "key0", 00:38:07.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:07.211 "hdgst": false, 00:38:07.211 "ddgst": false, 00:38:07.211 "multipath": "multipath" 00:38:07.211 } 00:38:07.211 }, 00:38:07.211 { 00:38:07.211 "method": "bdev_nvme_set_hotplug", 00:38:07.211 "params": { 00:38:07.211 "period_us": 100000, 00:38:07.211 "enable": false 00:38:07.211 } 00:38:07.211 }, 00:38:07.211 { 00:38:07.211 "method": "bdev_wait_for_examine" 00:38:07.211 } 00:38:07.211 ] 00:38:07.211 }, 00:38:07.211 { 00:38:07.211 "subsystem": "nbd", 00:38:07.211 "config": [] 00:38:07.211 } 00:38:07.211 ] 00:38:07.211 }' 00:38:07.211 20:58:00 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:07.211 20:58:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:07.211 [2024-12-05 20:58:00.547168] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:38:07.211 [2024-12-05 20:58:00.547215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665532 ] 00:38:07.211 [2024-12-05 20:58:00.620049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.470 [2024-12-05 20:58:00.659203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:07.470 [2024-12-05 20:58:00.819251] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:08.037 20:58:01 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:08.037 20:58:01 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:08.037 20:58:01 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:08.037 20:58:01 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:08.037 20:58:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.296 20:58:01 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:08.296 20:58:01 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:08.296 20:58:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:08.296 20:58:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:08.296 20:58:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:08.296 20:58:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.296 20:58:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:08.296 20:58:01 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:08.296 20:58:01 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:08.296 20:58:01 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:08.296 20:58:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:08.296 20:58:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:08.296 20:58:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:08.296 20:58:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.555 20:58:01 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:08.555 20:58:01 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:08.555 20:58:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:08.555 20:58:01 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:08.814 20:58:02 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:08.814 20:58:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:08.814 20:58:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LYjgPOt1HL /tmp/tmp.9WYQt1jvll 00:38:08.814 20:58:02 keyring_file -- keyring/file.sh@20 -- # killprocess 665532 00:38:08.814 20:58:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 665532 ']' 00:38:08.814 20:58:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 665532 00:38:08.815 20:58:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:08.815 20:58:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:08.815 20:58:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 665532 00:38:08.815 20:58:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:08.815 20:58:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:08.815 20:58:02 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 665532' 00:38:08.815 killing process with pid 665532 00:38:08.815 20:58:02 keyring_file -- common/autotest_common.sh@973 -- # kill 665532 00:38:08.815 Received shutdown signal, test time was about 1.000000 seconds 00:38:08.815 00:38:08.815 Latency(us) 00:38:08.815 [2024-12-05T19:58:02.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.815 [2024-12-05T19:58:02.256Z] =================================================================================================================== 00:38:08.815 [2024-12-05T19:58:02.256Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:08.815 20:58:02 keyring_file -- common/autotest_common.sh@978 -- # wait 665532 00:38:09.074 20:58:02 keyring_file -- keyring/file.sh@21 -- # killprocess 663812 00:38:09.074 20:58:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 663812 ']' 00:38:09.074 20:58:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 663812 00:38:09.074 20:58:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:09.074 20:58:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:09.074 20:58:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 663812 00:38:09.074 20:58:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:09.074 20:58:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:09.074 20:58:02 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 663812' 00:38:09.074 killing process with pid 663812 00:38:09.074 20:58:02 keyring_file -- common/autotest_common.sh@973 -- # kill 663812 00:38:09.074 20:58:02 keyring_file -- common/autotest_common.sh@978 -- # wait 663812 00:38:09.333 00:38:09.333 real 0m11.607s 00:38:09.333 user 0m28.062s 00:38:09.333 sys 0m2.593s 00:38:09.333 20:58:02 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:09.333 20:58:02 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:09.333 ************************************ 00:38:09.333 END TEST keyring_file 00:38:09.333 ************************************ 00:38:09.333 20:58:02 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:09.333 20:58:02 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:09.333 20:58:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:09.333 20:58:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:09.333 20:58:02 -- common/autotest_common.sh@10 -- # set +x 00:38:09.333 ************************************ 00:38:09.333 START TEST keyring_linux 00:38:09.333 ************************************ 00:38:09.333 20:58:02 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:09.333 Joined session keyring: 795673644 00:38:09.593 * Looking for test storage... 
00:38:09.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:09.593 20:58:02 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:09.593 20:58:02 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:38:09.593 20:58:02 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:09.593 20:58:02 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:09.593 20:58:02 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:09.593 20:58:02 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:09.593 20:58:02 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:09.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.593 --rc genhtml_branch_coverage=1 00:38:09.593 --rc genhtml_function_coverage=1 00:38:09.593 --rc genhtml_legend=1 00:38:09.593 --rc geninfo_all_blocks=1 00:38:09.593 --rc geninfo_unexecuted_blocks=1 00:38:09.593 00:38:09.593 ' 00:38:09.593 20:58:02 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:09.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.593 --rc genhtml_branch_coverage=1 00:38:09.593 --rc genhtml_function_coverage=1 00:38:09.593 --rc genhtml_legend=1 00:38:09.593 --rc geninfo_all_blocks=1 00:38:09.593 --rc geninfo_unexecuted_blocks=1 00:38:09.593 00:38:09.593 ' 
00:38:09.593 20:58:02 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:09.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.593 --rc genhtml_branch_coverage=1 00:38:09.593 --rc genhtml_function_coverage=1 00:38:09.594 --rc genhtml_legend=1 00:38:09.594 --rc geninfo_all_blocks=1 00:38:09.594 --rc geninfo_unexecuted_blocks=1 00:38:09.594 00:38:09.594 ' 00:38:09.594 20:58:02 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:09.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.594 --rc genhtml_branch_coverage=1 00:38:09.594 --rc genhtml_function_coverage=1 00:38:09.594 --rc genhtml_legend=1 00:38:09.594 --rc geninfo_all_blocks=1 00:38:09.594 --rc geninfo_unexecuted_blocks=1 00:38:09.594 00:38:09.594 ' 00:38:09.594 20:58:02 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:09.594 20:58:02 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:09.594 20:58:02 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:09.594 20:58:02 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:09.594 20:58:02 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:09.594 20:58:02 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.594 20:58:02 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.594 20:58:02 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.594 20:58:02 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:09.594 20:58:02 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:09.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:09.594 20:58:02 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:09.594 20:58:02 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:09.594 20:58:02 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:09.594 20:58:02 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:09.594 20:58:02 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:09.594 20:58:02 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:09.594 /tmp/:spdk-test:key0 00:38:09.594 20:58:02 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:09.594 20:58:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:09.594 20:58:02 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:09.594 20:58:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:09.594 20:58:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:09.594 /tmp/:spdk-test:key1 00:38:09.594 20:58:03 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=665903 00:38:09.594 20:58:03 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:09.594 20:58:03 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 665903 00:38:09.594 20:58:03 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 665903 ']' 00:38:09.594 20:58:03 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.594 20:58:03 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:09.594 20:58:03 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:09.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:09.594 20:58:03 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:09.594 20:58:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:09.853 [2024-12-05 20:58:03.068962] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:38:09.854 [2024-12-05 20:58:03.069011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665903 ] 00:38:09.854 [2024-12-05 20:58:03.138880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.854 [2024-12-05 20:58:03.178195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.112 20:58:03 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:10.112 20:58:03 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:10.112 20:58:03 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:10.112 20:58:03 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.112 20:58:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:10.112 [2024-12-05 20:58:03.383443] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:10.112 null0 00:38:10.112 [2024-12-05 20:58:03.415503] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:10.112 [2024-12-05 20:58:03.415860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:10.112 20:58:03 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.112 20:58:03 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:10.112 821120630 00:38:10.112 20:58:03 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:10.112 726020419 00:38:10.112 20:58:03 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=666085 00:38:10.112 20:58:03 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 666085 /var/tmp/bperf.sock 00:38:10.112 20:58:03 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:10.112 20:58:03 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 666085 ']' 00:38:10.112 20:58:03 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:10.112 20:58:03 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:10.112 20:58:03 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:10.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:10.112 20:58:03 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:10.112 20:58:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:10.112 [2024-12-05 20:58:03.487853] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:38:10.112 [2024-12-05 20:58:03.487896] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666085 ] 00:38:10.371 [2024-12-05 20:58:03.559396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.371 [2024-12-05 20:58:03.598395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:10.371 20:58:03 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:10.371 20:58:03 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:10.371 20:58:03 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:10.371 20:58:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:10.371 20:58:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:10.371 20:58:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:10.631 20:58:04 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:10.631 20:58:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:10.889 [2024-12-05 20:58:04.202843] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:10.889 nvme0n1 00:38:10.889 20:58:04 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:38:10.889 20:58:04 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:10.889 20:58:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:10.889 20:58:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:10.889 20:58:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:10.889 20:58:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.147 20:58:04 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:11.147 20:58:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:11.147 20:58:04 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:11.147 20:58:04 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:11.147 20:58:04 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:11.147 20:58:04 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:11.147 20:58:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.405 20:58:04 keyring_linux -- keyring/linux.sh@25 -- # sn=821120630 00:38:11.405 20:58:04 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:11.405 20:58:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:11.405 20:58:04 keyring_linux -- keyring/linux.sh@26 -- # [[ 821120630 == \8\2\1\1\2\0\6\3\0 ]] 00:38:11.405 20:58:04 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 821120630 00:38:11.405 20:58:04 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:11.405 20:58:04 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:11.405 Running I/O for 1 seconds... 00:38:12.778 23438.00 IOPS, 91.55 MiB/s 00:38:12.778 Latency(us) 00:38:12.778 [2024-12-05T19:58:06.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.778 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:12.778 nvme0n1 : 1.01 23437.23 91.55 0.00 0.00 5444.56 4259.84 9949.56 00:38:12.778 [2024-12-05T19:58:06.219Z] =================================================================================================================== 00:38:12.778 [2024-12-05T19:58:06.219Z] Total : 23437.23 91.55 0.00 0.00 5444.56 4259.84 9949.56 00:38:12.778 { 00:38:12.778 "results": [ 00:38:12.778 { 00:38:12.778 "job": "nvme0n1", 00:38:12.778 "core_mask": "0x2", 00:38:12.778 "workload": "randread", 00:38:12.778 "status": "finished", 00:38:12.778 "queue_depth": 128, 00:38:12.778 "io_size": 4096, 00:38:12.778 "runtime": 1.005537, 00:38:12.778 "iops": 23437.22806818645, 00:38:12.778 "mibps": 91.55167214135332, 00:38:12.778 "io_failed": 0, 00:38:12.778 "io_timeout": 0, 00:38:12.778 "avg_latency_us": 5444.5640398554215, 00:38:12.778 "min_latency_us": 4259.84, 00:38:12.778 "max_latency_us": 9949.556363636364 00:38:12.778 } 00:38:12.778 ], 00:38:12.778 "core_count": 1 00:38:12.778 } 00:38:12.778 20:58:05 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:12.778 20:58:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:12.778 20:58:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:12.778 20:58:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:12.778 20:58:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:12.778 20:58:05 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:12.778 20:58:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:12.778 20:58:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:12.778 20:58:06 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:12.778 20:58:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:12.778 20:58:06 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:12.778 20:58:06 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:12.778 20:58:06 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:12.778 20:58:06 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:12.778 20:58:06 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:12.778 20:58:06 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.778 20:58:06 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:12.778 20:58:06 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.778 20:58:06 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:12.778 20:58:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:13.037 [2024-12-05 20:58:06.345542] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:13.037 [2024-12-05 20:58:06.346263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa20c0 (107): Transport endpoint is not connected 00:38:13.037 [2024-12-05 20:58:06.347255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa20c0 (9): Bad file descriptor 00:38:13.037 [2024-12-05 20:58:06.348256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:13.037 [2024-12-05 20:58:06.348265] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:13.037 [2024-12-05 20:58:06.348272] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:13.037 [2024-12-05 20:58:06.348280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:13.037 request: 00:38:13.037 { 00:38:13.037 "name": "nvme0", 00:38:13.037 "trtype": "tcp", 00:38:13.037 "traddr": "127.0.0.1", 00:38:13.037 "adrfam": "ipv4", 00:38:13.037 "trsvcid": "4420", 00:38:13.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:13.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:13.037 "prchk_reftag": false, 00:38:13.037 "prchk_guard": false, 00:38:13.037 "hdgst": false, 00:38:13.037 "ddgst": false, 00:38:13.037 "psk": ":spdk-test:key1", 00:38:13.037 "allow_unrecognized_csi": false, 00:38:13.037 "method": "bdev_nvme_attach_controller", 00:38:13.037 "req_id": 1 00:38:13.037 } 00:38:13.037 Got JSON-RPC error response 00:38:13.037 response: 00:38:13.037 { 00:38:13.037 "code": -5, 00:38:13.037 "message": "Input/output error" 00:38:13.037 } 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@33 -- # sn=821120630 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 821120630 00:38:13.037 1 links removed 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:13.037 
20:58:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@33 -- # sn=726020419 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 726020419 00:38:13.037 1 links removed 00:38:13.037 20:58:06 keyring_linux -- keyring/linux.sh@41 -- # killprocess 666085 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 666085 ']' 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 666085 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 666085 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 666085' 00:38:13.037 killing process with pid 666085 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@973 -- # kill 666085 00:38:13.037 Received shutdown signal, test time was about 1.000000 seconds 00:38:13.037 00:38:13.037 Latency(us) 00:38:13.037 [2024-12-05T19:58:06.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:13.037 [2024-12-05T19:58:06.478Z] =================================================================================================================== 00:38:13.037 [2024-12-05T19:58:06.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:13.037 20:58:06 keyring_linux -- common/autotest_common.sh@978 -- # wait 666085 
00:38:13.296 20:58:06 keyring_linux -- keyring/linux.sh@42 -- # killprocess 665903 00:38:13.296 20:58:06 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 665903 ']' 00:38:13.296 20:58:06 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 665903 00:38:13.296 20:58:06 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:13.296 20:58:06 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:13.296 20:58:06 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 665903 00:38:13.296 20:58:06 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:13.296 20:58:06 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:13.296 20:58:06 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 665903' 00:38:13.296 killing process with pid 665903 00:38:13.296 20:58:06 keyring_linux -- common/autotest_common.sh@973 -- # kill 665903 00:38:13.296 20:58:06 keyring_linux -- common/autotest_common.sh@978 -- # wait 665903 00:38:13.554 00:38:13.554 real 0m4.236s 00:38:13.554 user 0m7.802s 00:38:13.554 sys 0m1.495s 00:38:13.554 20:58:06 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:13.554 20:58:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:13.554 ************************************ 00:38:13.554 END TEST keyring_linux 00:38:13.554 ************************************ 00:38:13.554 20:58:06 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:13.554 20:58:06 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:13.554 20:58:06 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:13.554 20:58:06 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:13.554 20:58:06 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:13.554 20:58:06 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:13.554 20:58:06 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:13.554 20:58:06 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:38:13.554 20:58:06 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:13.554 20:58:06 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:13.554 20:58:06 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:13.554 20:58:06 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:13.554 20:58:06 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:13.554 20:58:06 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:13.554 20:58:06 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:13.554 20:58:06 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:13.554 20:58:06 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:13.554 20:58:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:13.554 20:58:06 -- common/autotest_common.sh@10 -- # set +x 00:38:13.554 20:58:06 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:13.554 20:58:06 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:13.554 20:58:06 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:13.554 20:58:06 -- common/autotest_common.sh@10 -- # set +x 00:38:20.119 INFO: APP EXITING 00:38:20.119 INFO: killing all VMs 00:38:20.119 INFO: killing vhost app 00:38:20.119 INFO: EXIT DONE 00:38:22.025 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:38:22.025 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:38:22.025 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:38:22.025 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:38:25.311 Cleaning 00:38:25.311 Removing: /var/run/dpdk/spdk0/config 00:38:25.311 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:25.311 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:25.311 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:25.311 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:25.311 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:25.311 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:25.311 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:25.311 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:25.311 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:25.311 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:25.311 Removing: /var/run/dpdk/spdk1/config 00:38:25.311 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:25.311 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:25.311 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:25.311 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:25.311 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:25.311 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:25.311 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:25.311 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:25.311 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:25.311 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:25.311 Removing: /var/run/dpdk/spdk2/config 00:38:25.311 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:25.311 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:25.311 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:25.311 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:25.311 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:25.311 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:25.311 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:25.311 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:25.311 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:25.311 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:25.311 Removing: /var/run/dpdk/spdk3/config 00:38:25.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:25.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:25.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:25.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:25.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:25.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:25.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:25.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:25.311 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:25.311 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:25.311 Removing: /var/run/dpdk/spdk4/config 00:38:25.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:25.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:25.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:25.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:25.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:25.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:25.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:25.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:25.311 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:25.311 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:38:25.311 Removing: /dev/shm/bdev_svc_trace.1 00:38:25.311 Removing: /dev/shm/nvmf_trace.0 00:38:25.311 Removing: /dev/shm/spdk_tgt_trace.pid153381 00:38:25.311 Removing: /var/run/dpdk/spdk0 00:38:25.311 Removing: /var/run/dpdk/spdk1 00:38:25.311 Removing: /var/run/dpdk/spdk2 00:38:25.311 Removing: /var/run/dpdk/spdk3 00:38:25.311 Removing: /var/run/dpdk/spdk4 00:38:25.311 Removing: /var/run/dpdk/spdk_pid150941 00:38:25.311 Removing: /var/run/dpdk/spdk_pid152172 00:38:25.311 Removing: /var/run/dpdk/spdk_pid153381 00:38:25.311 Removing: /var/run/dpdk/spdk_pid154078 00:38:25.311 Removing: /var/run/dpdk/spdk_pid155018 00:38:25.311 Removing: /var/run/dpdk/spdk_pid155270 00:38:25.311 Removing: /var/run/dpdk/spdk_pid156286 00:38:25.311 Removing: /var/run/dpdk/spdk_pid156549 00:38:25.311 Removing: /var/run/dpdk/spdk_pid156933 00:38:25.311 Removing: /var/run/dpdk/spdk_pid158607 00:38:25.311 Removing: /var/run/dpdk/spdk_pid159998 00:38:25.311 Removing: /var/run/dpdk/spdk_pid160403 00:38:25.311 Removing: /var/run/dpdk/spdk_pid160732 00:38:25.311 Removing: /var/run/dpdk/spdk_pid161076 00:38:25.311 Removing: /var/run/dpdk/spdk_pid161410 00:38:25.311 Removing: /var/run/dpdk/spdk_pid161693 00:38:25.311 Removing: /var/run/dpdk/spdk_pid161971 00:38:25.311 Removing: /var/run/dpdk/spdk_pid162291 00:38:25.311 Removing: /var/run/dpdk/spdk_pid163151 00:38:25.311 Removing: /var/run/dpdk/spdk_pid166596 00:38:25.311 Removing: /var/run/dpdk/spdk_pid166942 00:38:25.311 Removing: /var/run/dpdk/spdk_pid167228 00:38:25.311 Removing: /var/run/dpdk/spdk_pid167241 00:38:25.311 Removing: /var/run/dpdk/spdk_pid167796 00:38:25.311 Removing: /var/run/dpdk/spdk_pid167811 00:38:25.311 Removing: /var/run/dpdk/spdk_pid168749 00:38:25.311 Removing: /var/run/dpdk/spdk_pid169006 00:38:25.311 Removing: /var/run/dpdk/spdk_pid169301 00:38:25.311 Removing: /var/run/dpdk/spdk_pid169322 00:38:25.311 Removing: /var/run/dpdk/spdk_pid169610 00:38:25.311 Removing: /var/run/dpdk/spdk_pid169875 00:38:25.311 
Removing: /var/run/dpdk/spdk_pid170259 00:38:25.311 Removing: /var/run/dpdk/spdk_pid170541 00:38:25.311 Removing: /var/run/dpdk/spdk_pid170866 00:38:25.311 Removing: /var/run/dpdk/spdk_pid174845 00:38:25.311 Removing: /var/run/dpdk/spdk_pid179348 00:38:25.311 Removing: /var/run/dpdk/spdk_pid190288 00:38:25.311 Removing: /var/run/dpdk/spdk_pid191060 00:38:25.311 Removing: /var/run/dpdk/spdk_pid195642 00:38:25.311 Removing: /var/run/dpdk/spdk_pid196015 00:38:25.311 Removing: /var/run/dpdk/spdk_pid200566 00:38:25.311 Removing: /var/run/dpdk/spdk_pid206900 00:38:25.311 Removing: /var/run/dpdk/spdk_pid209633 00:38:25.311 Removing: /var/run/dpdk/spdk_pid221177 00:38:25.311 Removing: /var/run/dpdk/spdk_pid230764 00:38:25.311 Removing: /var/run/dpdk/spdk_pid232607 00:38:25.311 Removing: /var/run/dpdk/spdk_pid233665 00:38:25.311 Removing: /var/run/dpdk/spdk_pid251856 00:38:25.311 Removing: /var/run/dpdk/spdk_pid256060 00:38:25.311 Removing: /var/run/dpdk/spdk_pid305425 00:38:25.311 Removing: /var/run/dpdk/spdk_pid311270 00:38:25.311 Removing: /var/run/dpdk/spdk_pid317354 00:38:25.311 Removing: /var/run/dpdk/spdk_pid324508 00:38:25.311 Removing: /var/run/dpdk/spdk_pid324510 00:38:25.311 Removing: /var/run/dpdk/spdk_pid325934 00:38:25.311 Removing: /var/run/dpdk/spdk_pid326731 00:38:25.311 Removing: /var/run/dpdk/spdk_pid327772 00:38:25.311 Removing: /var/run/dpdk/spdk_pid328308 00:38:25.311 Removing: /var/run/dpdk/spdk_pid328383 00:38:25.311 Removing: /var/run/dpdk/spdk_pid328670 00:38:25.311 Removing: /var/run/dpdk/spdk_pid328837 00:38:25.311 Removing: /var/run/dpdk/spdk_pid328842 00:38:25.311 Removing: /var/run/dpdk/spdk_pid329886 00:38:25.311 Removing: /var/run/dpdk/spdk_pid330714 00:38:25.570 Removing: /var/run/dpdk/spdk_pid331723 00:38:25.570 Removing: /var/run/dpdk/spdk_pid332259 00:38:25.570 Removing: /var/run/dpdk/spdk_pid332399 00:38:25.570 Removing: /var/run/dpdk/spdk_pid332731 00:38:25.570 Removing: /var/run/dpdk/spdk_pid333920 00:38:25.570 Removing: 
/var/run/dpdk/spdk_pid335038 00:38:25.570 Removing: /var/run/dpdk/spdk_pid343980 00:38:25.570 Removing: /var/run/dpdk/spdk_pid374298 00:38:25.570 Removing: /var/run/dpdk/spdk_pid379035 00:38:25.570 Removing: /var/run/dpdk/spdk_pid380868 00:38:25.570 Removing: /var/run/dpdk/spdk_pid382871 00:38:25.570 Removing: /var/run/dpdk/spdk_pid382970 00:38:25.570 Removing: /var/run/dpdk/spdk_pid383220 00:38:25.570 Removing: /var/run/dpdk/spdk_pid383256 00:38:25.570 Removing: /var/run/dpdk/spdk_pid383814 00:38:25.571 Removing: /var/run/dpdk/spdk_pid385916 00:38:25.571 Removing: /var/run/dpdk/spdk_pid386780 00:38:25.571 Removing: /var/run/dpdk/spdk_pid387339 00:38:25.571 Removing: /var/run/dpdk/spdk_pid389737 00:38:25.571 Removing: /var/run/dpdk/spdk_pid390536 00:38:25.571 Removing: /var/run/dpdk/spdk_pid391118 00:38:25.571 Removing: /var/run/dpdk/spdk_pid395448 00:38:25.571 Removing: /var/run/dpdk/spdk_pid401362 00:38:25.571 Removing: /var/run/dpdk/spdk_pid401363 00:38:25.571 Removing: /var/run/dpdk/spdk_pid401364 00:38:25.571 Removing: /var/run/dpdk/spdk_pid405950 00:38:25.571 Removing: /var/run/dpdk/spdk_pid415027 00:38:25.571 Removing: /var/run/dpdk/spdk_pid419362 00:38:25.571 Removing: /var/run/dpdk/spdk_pid425885 00:38:25.571 Removing: /var/run/dpdk/spdk_pid427301 00:38:25.571 Removing: /var/run/dpdk/spdk_pid428862 00:38:25.571 Removing: /var/run/dpdk/spdk_pid430488 00:38:25.571 Removing: /var/run/dpdk/spdk_pid435392 00:38:25.571 Removing: /var/run/dpdk/spdk_pid440095 00:38:25.571 Removing: /var/run/dpdk/spdk_pid444303 00:38:25.571 Removing: /var/run/dpdk/spdk_pid452206 00:38:25.571 Removing: /var/run/dpdk/spdk_pid452208 00:38:25.571 Removing: /var/run/dpdk/spdk_pid457407 00:38:25.571 Removing: /var/run/dpdk/spdk_pid457672 00:38:25.571 Removing: /var/run/dpdk/spdk_pid457821 00:38:25.571 Removing: /var/run/dpdk/spdk_pid458592 00:38:25.571 Removing: /var/run/dpdk/spdk_pid458613 00:38:25.571 Removing: /var/run/dpdk/spdk_pid463409 00:38:25.571 Removing: 
/var/run/dpdk/spdk_pid464066
00:38:25.571 Removing: /var/run/dpdk/spdk_pid468721
00:38:25.571 Removing: /var/run/dpdk/spdk_pid471609
00:38:25.571 Removing: /var/run/dpdk/spdk_pid477349
00:38:25.571 Removing: /var/run/dpdk/spdk_pid482979
00:38:25.571 Removing: /var/run/dpdk/spdk_pid492162
00:38:25.571 Removing: /var/run/dpdk/spdk_pid499836
00:38:25.571 Removing: /var/run/dpdk/spdk_pid499892
00:38:25.571 Removing: /var/run/dpdk/spdk_pid520330
00:38:25.571 Removing: /var/run/dpdk/spdk_pid520865
00:38:25.571 Removing: /var/run/dpdk/spdk_pid521483
00:38:25.571 Removing: /var/run/dpdk/spdk_pid522106
00:38:25.571 Removing: /var/run/dpdk/spdk_pid522778
00:38:25.571 Removing: /var/run/dpdk/spdk_pid523546
00:38:25.571 Removing: /var/run/dpdk/spdk_pid524099
00:38:25.571 Removing: /var/run/dpdk/spdk_pid524643
00:38:25.571 Removing: /var/run/dpdk/spdk_pid529081
00:38:25.571 Removing: /var/run/dpdk/spdk_pid529378
00:38:25.571 Removing: /var/run/dpdk/spdk_pid535717
00:38:25.571 Removing: /var/run/dpdk/spdk_pid535879
00:38:25.571 Removing: /var/run/dpdk/spdk_pid541563
00:38:25.571 Removing: /var/run/dpdk/spdk_pid545923
00:38:25.571 Removing: /var/run/dpdk/spdk_pid556912
00:38:25.571 Removing: /var/run/dpdk/spdk_pid557443
00:38:25.571 Removing: /var/run/dpdk/spdk_pid561998
00:38:25.571 Removing: /var/run/dpdk/spdk_pid562281
00:38:25.571 Removing: /var/run/dpdk/spdk_pid566723
00:38:25.830 Removing: /var/run/dpdk/spdk_pid572703
00:38:25.830 Removing: /var/run/dpdk/spdk_pid575410
00:38:25.830 Removing: /var/run/dpdk/spdk_pid586125
00:38:25.830 Removing: /var/run/dpdk/spdk_pid595197
00:38:25.830 Removing: /var/run/dpdk/spdk_pid597027
00:38:25.830 Removing: /var/run/dpdk/spdk_pid598000
00:38:25.830 Removing: /var/run/dpdk/spdk_pid615585
00:38:25.830 Removing: /var/run/dpdk/spdk_pid619626
00:38:25.830 Removing: /var/run/dpdk/spdk_pid622609
00:38:25.830 Removing: /var/run/dpdk/spdk_pid631046
00:38:25.830 Removing: /var/run/dpdk/spdk_pid631151
00:38:25.830 Removing: /var/run/dpdk/spdk_pid636523
00:38:25.830 Removing: /var/run/dpdk/spdk_pid638536
00:38:25.830 Removing: /var/run/dpdk/spdk_pid640719
00:38:25.830 Removing: /var/run/dpdk/spdk_pid641964
00:38:25.830 Removing: /var/run/dpdk/spdk_pid643975
00:38:25.830 Removing: /var/run/dpdk/spdk_pid645407
00:38:25.830 Removing: /var/run/dpdk/spdk_pid655119
00:38:25.830 Removing: /var/run/dpdk/spdk_pid655647
00:38:25.830 Removing: /var/run/dpdk/spdk_pid656170
00:38:25.830 Removing: /var/run/dpdk/spdk_pid658631
00:38:25.830 Removing: /var/run/dpdk/spdk_pid659161
00:38:25.830 Removing: /var/run/dpdk/spdk_pid659696
00:38:25.830 Removing: /var/run/dpdk/spdk_pid663812
00:38:25.830 Removing: /var/run/dpdk/spdk_pid663828
00:38:25.830 Removing: /var/run/dpdk/spdk_pid665532
00:38:25.830 Removing: /var/run/dpdk/spdk_pid665903
00:38:25.830 Removing: /var/run/dpdk/spdk_pid666085
00:38:25.830 Clean
00:38:25.830 20:58:19 -- common/autotest_common.sh@1453 -- # return 0
00:38:25.830 20:58:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:38:25.830 20:58:19 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:25.830 20:58:19 -- common/autotest_common.sh@10 -- # set +x
00:38:25.830 20:58:19 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:38:25.830 20:58:19 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:25.830 20:58:19 -- common/autotest_common.sh@10 -- # set +x
00:38:25.830 20:58:19 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:26.089 20:58:19 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:38:26.089 20:58:19 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:38:26.089 20:58:19 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:38:26.089 20:58:19 -- spdk/autotest.sh@398 -- # hostname
00:38:26.089 20:58:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:38:26.089 geninfo: WARNING: invalid characters removed from testname!
00:38:48.043 20:58:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:48.044 20:58:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:49.421 20:58:42 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:51.323 20:58:44 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:52.699 20:58:46 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:54.604 20:58:47 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:56.508 20:58:49 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:38:56.508 20:58:49 -- spdk/autorun.sh@1 -- $ timing_finish
00:38:56.508 20:58:49 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:38:56.508 20:58:49 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:56.508 20:58:49 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:38:56.508 20:58:49 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:56.508 + [[ -n 71415 ]]
00:38:56.508 + sudo kill 71415
00:38:56.519 [Pipeline] }
00:38:56.533 [Pipeline] // stage
00:38:56.537 [Pipeline] }
00:38:56.550 [Pipeline] // timeout
00:38:56.554 [Pipeline] }
00:38:56.566 [Pipeline] // catchError
00:38:56.571 [Pipeline] }
00:38:56.586 [Pipeline] // wrap
00:38:56.595 [Pipeline] }
00:38:56.608 [Pipeline] // catchError
00:38:56.615 [Pipeline] stage
00:38:56.617 [Pipeline] { (Epilogue)
00:38:56.627 [Pipeline] catchError
00:38:56.628 [Pipeline] {
00:38:56.638 [Pipeline] echo
00:38:56.640 Cleanup processes
00:38:56.647 [Pipeline] sh
00:38:56.937 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:56.937 677252 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:56.952 [Pipeline] sh
00:38:57.241 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:57.241 ++ grep -v 'sudo pgrep'
00:38:57.241 ++ awk '{print $1}'
00:38:57.241 + sudo kill -9
00:38:57.241 + true
00:38:57.252 [Pipeline] sh
00:38:57.538 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:07.539 [Pipeline] sh
00:39:07.921 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:07.921 Artifacts sizes are good
00:39:07.971 [Pipeline] archiveArtifacts
00:39:07.990 Archiving artifacts
00:39:08.400 [Pipeline] sh
00:39:08.687 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:39:08.719 [Pipeline] cleanWs
00:39:08.730 [WS-CLEANUP] Deleting project workspace...
00:39:08.730 [WS-CLEANUP] Deferred wipeout is used...
00:39:08.736 [WS-CLEANUP] done
00:39:08.739 [Pipeline] }
00:39:08.756 [Pipeline] // catchError
00:39:08.769 [Pipeline] sh
00:39:09.052 + logger -p user.info -t JENKINS-CI
00:39:09.062 [Pipeline] }
00:39:09.076 [Pipeline] // stage
00:39:09.081 [Pipeline] }
00:39:09.096 [Pipeline] // node
00:39:09.102 [Pipeline] End of Pipeline
00:39:09.140 Finished: SUCCESS